Test Report: QEMU_macOS 19740

                    
f4f6e0076e771cedcca340e072cd1813dc91a89c:2024-10-01:36461

Tests failed (100/273)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 39.82
7 TestDownloadOnly/v1.20.0/kubectl 0
21 TestBinaryMirror 0.27
22 TestOffline 10.13
45 TestCertOptions 12.31
46 TestCertExpiration 197.87
47 TestDockerFlags 12.81
48 TestForceSystemdFlag 12.02
49 TestForceSystemdEnv 10.48
94 TestFunctional/parallel/ServiceCmdConnect 30.05
166 TestMultiControlPlane/serial/StopSecondaryNode 162.29
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 150.12
168 TestMultiControlPlane/serial/RestartSecondaryNode 185.33
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 150.11
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 332.57
171 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
173 TestMultiControlPlane/serial/StopCluster 300.23
174 TestMultiControlPlane/serial/RestartCluster 5.26
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
176 TestMultiControlPlane/serial/AddSecondaryNode 0.07
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
180 TestImageBuild/serial/Setup 10.05
183 TestJSONOutput/start/Command 9.82
189 TestJSONOutput/pause/Command 0.08
195 TestJSONOutput/unpause/Command 0.05
212 TestMinikubeProfile 10.21
215 TestMountStart/serial/StartWithMountFirst 10.62
218 TestMultiNode/serial/FreshStart2Nodes 10.01
219 TestMultiNode/serial/DeployApp2Nodes 113.62
220 TestMultiNode/serial/PingHostFrom2Pods 0.09
221 TestMultiNode/serial/AddNode 0.07
222 TestMultiNode/serial/MultiNodeLabels 0.06
223 TestMultiNode/serial/ProfileList 0.08
224 TestMultiNode/serial/CopyFile 0.06
225 TestMultiNode/serial/StopNode 0.13
226 TestMultiNode/serial/StartAfterStop 39.05
227 TestMultiNode/serial/RestartKeepsNodes 8.77
228 TestMultiNode/serial/DeleteNode 0.1
229 TestMultiNode/serial/StopMultiNode 3.17
230 TestMultiNode/serial/RestartMultiNode 5.25
231 TestMultiNode/serial/ValidateNameConflict 20.37
235 TestPreload 9.98
237 TestScheduledStopUnix 10.08
238 TestSkaffold 16.44
241 TestRunningBinaryUpgrade 708.87
243 TestKubernetesUpgrade 18.29
256 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.05
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.72
259 TestStoppedBinaryUpgrade/Upgrade 580.59
261 TestPause/serial/Start 10
271 TestNoKubernetes/serial/StartWithK8s 9.85
272 TestNoKubernetes/serial/StartWithStopK8s 6.32
273 TestNoKubernetes/serial/Start 6.38
277 TestNoKubernetes/serial/StartNoArgs 6.37
279 TestNetworkPlugins/group/auto/Start 9.88
281 TestNetworkPlugins/group/kindnet/Start 9.97
282 TestNetworkPlugins/group/calico/Start 10.18
283 TestNetworkPlugins/group/custom-flannel/Start 10.03
284 TestNetworkPlugins/group/false/Start 11.49
285 TestNetworkPlugins/group/enable-default-cni/Start 9.94
286 TestNetworkPlugins/group/flannel/Start 10.08
287 TestNetworkPlugins/group/bridge/Start 9.89
288 TestNetworkPlugins/group/kubenet/Start 9.83
290 TestStartStop/group/old-k8s-version/serial/FirstStart 10.08
292 TestStartStop/group/no-preload/serial/FirstStart 10.11
293 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
294 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.13
296 TestStartStop/group/no-preload/serial/DeployApp 0.09
297 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
300 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
302 TestStartStop/group/no-preload/serial/SecondStart 5.61
303 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
304 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
305 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
306 TestStartStop/group/old-k8s-version/serial/Pause 0.1
308 TestStartStop/group/embed-certs/serial/FirstStart 10.03
309 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
310 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
311 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
312 TestStartStop/group/no-preload/serial/Pause 0.1
314 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.84
315 TestStartStop/group/embed-certs/serial/DeployApp 0.09
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
318 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
319 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
322 TestStartStop/group/embed-certs/serial/SecondStart 5.25
324 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.32
325 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
327 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
328 TestStartStop/group/embed-certs/serial/Pause 0.1
330 TestStartStop/group/newest-cni/serial/FirstStart 9.96
331 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
332 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
333 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
339 TestStartStop/group/newest-cni/serial/SecondStart 5.26
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
343 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (39.82s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-065000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-065000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (39.822911208s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"36ff867f-a36a-4859-ad80-61632f0344ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-065000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ef622d1-6ccd-425a-8d3c-f32f551e8793","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19740"}}
	{"specversion":"1.0","id":"6b7e670c-7d71-4587-b0c1-29d1279a435a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig"}}
	{"specversion":"1.0","id":"add88860-07eb-4941-a73c-d300d2293901","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"07f062fe-6cd1-4116-a4d1-22093ae96da3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"15df7f1f-ce4e-4861-9ce9-950dcb2131c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube"}}
	{"specversion":"1.0","id":"4b2e2c7c-3008-44c7-9249-098ee6b8a9df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"f79218a3-30e4-425f-984a-963b99966214","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c65812b-000a-44bd-bcf5-f31b05f974fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"e345bf74-b998-4989-9ec6-155cf195934f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d8357325-b7ac-401e-b0c5-fc1ada0f9962","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-065000\" primary control-plane node in \"download-only-065000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"37cfeab3-5b56-476b-be77-7772f35e54e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"df6c76dc-1437-438f-9e56-f943841d631e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1088256c0 0x1088256c0 0x1088256c0 0x1088256c0 0x1088256c0 0x1088256c0 0x1088256c0] Decompressors:map[bz2:0x140004878a0 gz:0x140004878a8 tar:0x14000487850 tar.bz2:0x14000487860 tar.gz:0x14000487870 tar.xz:0x14000487880 tar.zst:0x14000487890 tbz2:0x14000487860 tgz:0x14
000487870 txz:0x14000487880 tzst:0x14000487890 xz:0x140004878d0 zip:0x140004878e0 zst:0x140004878d8] Getters:map[file:0x1400093a780 http:0x14000980910 https:0x14000980960] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"f18dc32a-ee5f-435e-99b3-d0ca22f9bba4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 15:46:47.212069    1660 out.go:345] Setting OutFile to fd 1 ...
	I1001 15:46:47.212213    1660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 15:46:47.212216    1660 out.go:358] Setting ErrFile to fd 2...
	I1001 15:46:47.212219    1660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 15:46:47.212350    1660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	W1001 15:46:47.212430    1660 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19740-1141/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19740-1141/.minikube/config/config.json: no such file or directory
	I1001 15:46:47.213735    1660 out.go:352] Setting JSON to true
	I1001 15:46:47.231014    1660 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":975,"bootTime":1727821832,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 15:46:47.231081    1660 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 15:46:47.235835    1660 out.go:97] [download-only-065000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 15:46:47.235983    1660 notify.go:220] Checking for updates...
	W1001 15:46:47.236070    1660 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball: no such file or directory
	I1001 15:46:47.239537    1660 out.go:169] MINIKUBE_LOCATION=19740
	I1001 15:46:47.242642    1660 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 15:46:47.247517    1660 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 15:46:47.250672    1660 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 15:46:47.254665    1660 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	W1001 15:46:47.258677    1660 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 15:46:47.258916    1660 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 15:46:47.263695    1660 out.go:97] Using the qemu2 driver based on user configuration
	I1001 15:46:47.263715    1660 start.go:297] selected driver: qemu2
	I1001 15:46:47.263731    1660 start.go:901] validating driver "qemu2" against <nil>
	I1001 15:46:47.263825    1660 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 15:46:47.267693    1660 out.go:169] Automatically selected the socket_vmnet network
	I1001 15:46:47.273115    1660 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1001 15:46:47.273219    1660 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 15:46:47.273264    1660 cni.go:84] Creating CNI manager for ""
	I1001 15:46:47.273298    1660 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1001 15:46:47.273343    1660 start.go:340] cluster config:
	{Name:download-only-065000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-065000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 15:46:47.277012    1660 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 15:46:47.280719    1660 out.go:97] Downloading VM boot image ...
	I1001 15:46:47.280740    1660 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I1001 15:47:05.035724    1660 out.go:97] Starting "download-only-065000" primary control-plane node in "download-only-065000" cluster
	I1001 15:47:05.035745    1660 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1001 15:47:05.315497    1660 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1001 15:47:05.315584    1660 cache.go:56] Caching tarball of preloaded images
	I1001 15:47:05.316447    1660 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1001 15:47:05.321918    1660 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1001 15:47:05.321944    1660 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1001 15:47:05.889505    1660 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1001 15:47:25.400974    1660 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1001 15:47:25.401157    1660 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1001 15:47:26.096493    1660 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1001 15:47:26.096699    1660 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/download-only-065000/config.json ...
	I1001 15:47:26.096720    1660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/download-only-065000/config.json: {Name:mke95ab2104e60b276ab470a74508d6d1fa617da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 15:47:26.096973    1660 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1001 15:47:26.097168    1660 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1001 15:47:26.957027    1660 out.go:193] 
	W1001 15:47:26.962015    1660 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1088256c0 0x1088256c0 0x1088256c0 0x1088256c0 0x1088256c0 0x1088256c0 0x1088256c0] Decompressors:map[bz2:0x140004878a0 gz:0x140004878a8 tar:0x14000487850 tar.bz2:0x14000487860 tar.gz:0x14000487870 tar.xz:0x14000487880 tar.zst:0x14000487890 tbz2:0x14000487860 tgz:0x14000487870 txz:0x14000487880 tzst:0x14000487890 xz:0x140004878d0 zip:0x140004878e0 zst:0x140004878d8] Getters:map[file:0x1400093a780 http:0x14000980910 https:0x14000980960] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1001 15:47:26.962046    1660 out_reason.go:110] 
	W1001 15:47:26.969923    1660 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 15:47:26.972907    1660 out.go:193] 

                                                
                                                
** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-065000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (39.82s)
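
The failure above boils down to a 404 on the v1.20.0 kubectl checksum: dl.k8s.io publishes no darwin/arm64 kubectl binary for that release, so the checksum download fails and minikube exits with status 40. A minimal check, assuming curl is available on the Jenkins host (URL copied from the log; -L is needed because dl.k8s.io redirects to the release bucket):

	# Print only the final HTTP status for the checksum URL that minikube tried to fetch.
	curl -sL -o /dev/null -w '%{http_code}\n' \
	  https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	# 404 here matches the "bad response code: 404" in the error above.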

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
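
This subtest fails as a direct consequence of the previous one: the 404 meant no kubectl was ever written to the cache path the test stats. A trivial follow-up check on the same host (path copied from the assertion above) would be expected to confirm the file is simply absent:

	# Hedged sanity check; should report "No such file or directory" for the cached binary.
	ls -l /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/darwin/arm64/v1.20.0/kubectl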

                                                
                                    
TestBinaryMirror (0.27s)

                                                
                                                
=== RUN   TestBinaryMirror
I1001 15:47:46.383366    1659 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-787000 --alsologtostderr --binary-mirror http://127.0.0.1:49313 --driver=qemu2 
aaa_download_only_test.go:314: (dbg) Non-zero exit: out/minikube-darwin-arm64 start --download-only -p binary-mirror-787000 --alsologtostderr --binary-mirror http://127.0.0.1:49313 --driver=qemu2 : exit status 40 (171.155458ms)

                                                
                                                
-- stdout --
	* [binary-mirror-787000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "binary-mirror-787000" primary control-plane node in "binary-mirror-787000" cluster
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 15:47:46.442793    1728 out.go:345] Setting OutFile to fd 1 ...
	I1001 15:47:46.442923    1728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 15:47:46.442926    1728 out.go:358] Setting ErrFile to fd 2...
	I1001 15:47:46.442929    1728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 15:47:46.443049    1728 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 15:47:46.444114    1728 out.go:352] Setting JSON to false
	I1001 15:47:46.460130    1728 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1034,"bootTime":1727821832,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 15:47:46.460206    1728 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 15:47:46.465812    1728 out.go:177] * [binary-mirror-787000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 15:47:46.471750    1728 notify.go:220] Checking for updates...
	I1001 15:47:46.474738    1728 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 15:47:46.477693    1728 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 15:47:46.480733    1728 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 15:47:46.484677    1728 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 15:47:46.487740    1728 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 15:47:46.490967    1728 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 15:47:46.494788    1728 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 15:47:46.501713    1728 start.go:297] selected driver: qemu2
	I1001 15:47:46.501720    1728 start.go:901] validating driver "qemu2" against <nil>
	I1001 15:47:46.501776    1728 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 15:47:46.504791    1728 out.go:177] * Automatically selected the socket_vmnet network
	I1001 15:47:46.508119    1728 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1001 15:47:46.508221    1728 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 15:47:46.508245    1728 cni.go:84] Creating CNI manager for ""
	I1001 15:47:46.508270    1728 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 15:47:46.508277    1728 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 15:47:46.508322    1728 start.go:340] cluster config:
	{Name:binary-mirror-787000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:binary-mirror-787000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:http://127.0.0.1:49313 DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket
_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 15:47:46.511873    1728 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 15:47:46.519743    1728 out.go:177] * Starting "binary-mirror-787000" primary control-plane node in "binary-mirror-787000" cluster
	I1001 15:47:46.523736    1728 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 15:47:46.523752    1728 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 15:47:46.523771    1728 cache.go:56] Caching tarball of preloaded images
	I1001 15:47:46.523825    1728 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 15:47:46.523835    1728 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 15:47:46.524040    1728 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/binary-mirror-787000/config.json ...
	I1001 15:47:46.524051    1728 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/binary-mirror-787000/config.json: {Name:mk533cc1ba054c0f1c5cc5e381553b74148cc465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 15:47:46.524378    1728 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 15:47:46.524428    1728 download.go:107] Downloading: http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/darwin/arm64/v1.31.1/kubectl
	I1001 15:47:46.557801    1728 out.go:201] 
	W1001 15:47:46.561717    1728 out.go:270] X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1093096c0 0x1093096c0 0x1093096c0 0x1093096c0 0x1093096c0 0x1093096c0 0x1093096c0] Decompressors:map[bz2:0x140004f9d70 gz:0x140004f9d78 tar:0x140004f9cb0 tar.bz2:0x140004f9cc0 tar.gz:0x140004f9d00 tar.xz:0x140004f9d10 tar.zst:0x140004f9d20 tbz2:0x140004f9cc0 tgz:0x140004f9d00 txz:0x140004f9d10 tzst:0x140004f9d20 xz:0x140004f9d80 zip:0x140004f9d90 zst:0x140004f9d88] Getters:map[file:0x14000ae8110 http:0x14000baa320 https:0x14000baa370] Dir:
false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: unexpected EOF
	X Exiting due to INET_CACHE_KUBECTL: Failed to cache kubectl: download failed: http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/darwin/arm64/v1.31.1/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1093096c0 0x1093096c0 0x1093096c0 0x1093096c0 0x1093096c0 0x1093096c0 0x1093096c0] Decompressors:map[bz2:0x140004f9d70 gz:0x140004f9d78 tar:0x140004f9cb0 tar.bz2:0x140004f9cc0 tar.gz:0x140004f9d00 tar.xz:0x140004f9d10 tar.zst:0x140004f9d20 tbz2:0x140004f9cc0 tgz:0x140004f9d00 txz:0x140004f9d10 tzst:0x140004f9d20 xz:0x140004f9d80 zip:0x140004f9d90 zst:0x140004f9d88] Getters:map[file:0x14000ae8110 http:0x14000baa320 https:0x14000baa370] Dir:false ProgressListener:<nil> Insecure:fals
e DisableSymlinks:false Options:[]}: unexpected EOF
	W1001 15:47:46.561722    1728 out.go:270] * 
	* 
	W1001 15:47:46.562258    1728 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 15:47:46.580847    1728 out.go:201] 

                                                
                                                
** /stderr **
aaa_download_only_test.go:315: start with --binary-mirror failed ["start" "--download-only" "-p" "binary-mirror-787000" "--alsologtostderr" "--binary-mirror" "http://127.0.0.1:49313" "--driver=qemu2" ""] : exit status 40
helpers_test.go:175: Cleaning up "binary-mirror-787000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-787000
--- FAIL: TestBinaryMirror (0.27s)
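
TestBinaryMirror points minikube at a local mirror (--binary-mirror http://127.0.0.1:49313), and the kubectl checksum download dies with "unexpected EOF". The port is ephemeral and only exists while the test's mirror process is running, so the probe below is a hedged sketch for a live run rather than a post-mortem command; it simply asks the mirror for the same checksum URL minikube requested:

	# Hypothetical probe while the test is running; the port (49313 in this log) changes per run.
	curl -v http://127.0.0.1:49313/v1.31.1/bin/darwin/arm64/kubectl.sha256
	# A truncated or empty response would line up with the "unexpected EOF" above.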

                                                
                                    
TestOffline (10.13s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-599000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-599000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.995986875s)

                                                
                                                
-- stdout --
	* [offline-docker-599000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-599000" primary control-plane node in "offline-docker-599000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-599000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:40:02.778700    4502 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:40:02.778852    4502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:40:02.778855    4502 out.go:358] Setting ErrFile to fd 2...
	I1001 16:40:02.778857    4502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:40:02.778984    4502 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:40:02.780142    4502 out.go:352] Setting JSON to false
	I1001 16:40:02.797615    4502 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4170,"bootTime":1727821832,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:40:02.797686    4502 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:40:02.804734    4502 out.go:177] * [offline-docker-599000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:40:02.811755    4502 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:40:02.811793    4502 notify.go:220] Checking for updates...
	I1001 16:40:02.816660    4502 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:40:02.819679    4502 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:40:02.822664    4502 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:40:02.825675    4502 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:40:02.828680    4502 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:40:02.832062    4502 config.go:182] Loaded profile config "multinode-603000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:40:02.832114    4502 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:40:02.835615    4502 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:40:02.842695    4502 start.go:297] selected driver: qemu2
	I1001 16:40:02.842706    4502 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:40:02.842715    4502 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:40:02.844533    4502 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 16:40:02.847592    4502 out.go:177] * Automatically selected the socket_vmnet network
	I1001 16:40:02.850703    4502 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:40:02.850722    4502 cni.go:84] Creating CNI manager for ""
	I1001 16:40:02.850756    4502 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:40:02.850760    4502 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 16:40:02.850798    4502 start.go:340] cluster config:
	{Name:offline-docker-599000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-599000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/b
in/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:40:02.854510    4502 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:40:02.859671    4502 out.go:177] * Starting "offline-docker-599000" primary control-plane node in "offline-docker-599000" cluster
	I1001 16:40:02.863601    4502 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:40:02.863629    4502 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:40:02.863644    4502 cache.go:56] Caching tarball of preloaded images
	I1001 16:40:02.863714    4502 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:40:02.863718    4502 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:40:02.863802    4502 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/offline-docker-599000/config.json ...
	I1001 16:40:02.863814    4502 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/offline-docker-599000/config.json: {Name:mk1b90281b85c73fe0f6595bcff766a51b0018c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:40:02.864111    4502 start.go:360] acquireMachinesLock for offline-docker-599000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:40:02.864151    4502 start.go:364] duration metric: took 31.292µs to acquireMachinesLock for "offline-docker-599000"
	I1001 16:40:02.864164    4502 start.go:93] Provisioning new machine with config: &{Name:offline-docker-599000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-599000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:40:02.864201    4502 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:40:02.868645    4502 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1001 16:40:02.884565    4502 start.go:159] libmachine.API.Create for "offline-docker-599000" (driver="qemu2")
	I1001 16:40:02.884595    4502 client.go:168] LocalClient.Create starting
	I1001 16:40:02.884702    4502 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:40:02.884735    4502 main.go:141] libmachine: Decoding PEM data...
	I1001 16:40:02.884745    4502 main.go:141] libmachine: Parsing certificate...
	I1001 16:40:02.884785    4502 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:40:02.884810    4502 main.go:141] libmachine: Decoding PEM data...
	I1001 16:40:02.884818    4502 main.go:141] libmachine: Parsing certificate...
	I1001 16:40:02.885179    4502 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:40:03.045913    4502 main.go:141] libmachine: Creating SSH key...
	I1001 16:40:03.222805    4502 main.go:141] libmachine: Creating Disk image...
	I1001 16:40:03.222815    4502 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:40:03.223187    4502 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/offline-docker-599000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/offline-docker-599000/disk.qcow2
	I1001 16:40:03.239237    4502 main.go:141] libmachine: STDOUT: 
	I1001 16:40:03.239260    4502 main.go:141] libmachine: STDERR: 
	I1001 16:40:03.239350    4502 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/offline-docker-599000/disk.qcow2 +20000M
	I1001 16:40:03.249454    4502 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:40:03.249476    4502 main.go:141] libmachine: STDERR: 
	I1001 16:40:03.249492    4502 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/offline-docker-599000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/offline-docker-599000/disk.qcow2
	I1001 16:40:03.249499    4502 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:40:03.249514    4502 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:40:03.249547    4502 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/offline-docker-599000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/offline-docker-599000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/offline-docker-599000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:f2:e8:4d:18:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/offline-docker-599000/disk.qcow2
	I1001 16:40:03.251358    4502 main.go:141] libmachine: STDOUT: 
	I1001 16:40:03.251384    4502 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:40:03.251405    4502 client.go:171] duration metric: took 366.807292ms to LocalClient.Create
	I1001 16:40:05.253466    4502 start.go:128] duration metric: took 2.389276375s to createHost
	I1001 16:40:05.253500    4502 start.go:83] releasing machines lock for "offline-docker-599000", held for 2.389368333s
	W1001 16:40:05.253520    4502 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:40:05.279894    4502 out.go:177] * Deleting "offline-docker-599000" in qemu2 ...
	W1001 16:40:05.293308    4502 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:40:05.293315    4502 start.go:729] Will try again in 5 seconds ...
	I1001 16:40:10.295376    4502 start.go:360] acquireMachinesLock for offline-docker-599000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:40:10.295496    4502 start.go:364] duration metric: took 85.959µs to acquireMachinesLock for "offline-docker-599000"
	I1001 16:40:10.295523    4502 start.go:93] Provisioning new machine with config: &{Name:offline-docker-599000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-599000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mo
untOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:40:10.295557    4502 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:40:10.307694    4502 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1001 16:40:10.323194    4502 start.go:159] libmachine.API.Create for "offline-docker-599000" (driver="qemu2")
	I1001 16:40:10.323226    4502 client.go:168] LocalClient.Create starting
	I1001 16:40:10.323297    4502 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:40:10.323333    4502 main.go:141] libmachine: Decoding PEM data...
	I1001 16:40:10.323341    4502 main.go:141] libmachine: Parsing certificate...
	I1001 16:40:10.323377    4502 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:40:10.323400    4502 main.go:141] libmachine: Decoding PEM data...
	I1001 16:40:10.323411    4502 main.go:141] libmachine: Parsing certificate...
	I1001 16:40:10.323766    4502 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:40:10.485750    4502 main.go:141] libmachine: Creating SSH key...
	I1001 16:40:10.658498    4502 main.go:141] libmachine: Creating Disk image...
	I1001 16:40:10.658510    4502 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:40:10.658807    4502 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/offline-docker-599000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/offline-docker-599000/disk.qcow2
	I1001 16:40:10.668549    4502 main.go:141] libmachine: STDOUT: 
	I1001 16:40:10.668580    4502 main.go:141] libmachine: STDERR: 
	I1001 16:40:10.668664    4502 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/offline-docker-599000/disk.qcow2 +20000M
	I1001 16:40:10.677558    4502 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:40:10.677579    4502 main.go:141] libmachine: STDERR: 
	I1001 16:40:10.677592    4502 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/offline-docker-599000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/offline-docker-599000/disk.qcow2
	I1001 16:40:10.677607    4502 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:40:10.677617    4502 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:40:10.677648    4502 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/offline-docker-599000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/offline-docker-599000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/offline-docker-599000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:cf:75:57:d8:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/offline-docker-599000/disk.qcow2
	I1001 16:40:10.679643    4502 main.go:141] libmachine: STDOUT: 
	I1001 16:40:10.679696    4502 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:40:10.679716    4502 client.go:171] duration metric: took 356.489459ms to LocalClient.Create
	I1001 16:40:12.681955    4502 start.go:128] duration metric: took 2.38639s to createHost
	I1001 16:40:12.682041    4502 start.go:83] releasing machines lock for "offline-docker-599000", held for 2.386559417s
	W1001 16:40:12.682440    4502 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-599000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-599000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:40:12.712007    4502 out.go:201] 
	W1001 16:40:12.715822    4502 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:40:12.715879    4502 out.go:270] * 
	* 
	W1001 16:40:12.727910    4502 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:40:12.732829    4502 out.go:201] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-599000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-10-01 16:40:12.74558 -0700 PDT m=+3205.635134834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-599000 -n offline-docker-599000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-599000 -n offline-docker-599000: exit status 7 (53.425917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-599000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-599000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-599000
--- FAIL: TestOffline (10.13s)
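
The root cause visible in the trace above, and repeated in the failures that follow, is the same line each time: Failed to connect to "/var/run/socket_vmnet": Connection refused. minikube launches the QEMU VM through /opt/socket_vmnet/bin/socket_vmnet_client, which needs a socket_vmnet daemon accepting connections on /var/run/socket_vmnet. The sketch below is illustrative only and is not part of the minikube test suite; it simply reproduces that connectivity check outside of minikube, assuming the SocketVMnetPath shown in the cluster config above.

// socket_vmnet_check.go: illustrative sketch, not minikube code. Dials the
// unix socket that socket_vmnet_client relies on; a "connection refused"
// error matches the ERROR lines captured above and indicates that no
// socket_vmnet daemon is accepting connections on that path.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config logged above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
}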

                                                
                                    
TestCertOptions (12.31s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-774000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-774000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (12.048419583s)

                                                
                                                
-- stdout --
	* [cert-options-774000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-774000" primary control-plane node in "cert-options-774000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-774000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-774000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-774000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-774000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-774000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (78.558083ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-774000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-774000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-774000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-774000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-774000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-774000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.464458ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-774000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-774000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-774000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-774000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-774000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-10-01 16:40:48.378102 -0700 PDT m=+3241.268022209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-774000 -n cert-options-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-774000 -n cert-options-774000: exit status 7 (30.295041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-774000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-774000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-774000
--- FAIL: TestCertOptions (12.31s)
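
For reference, the SAN assertion that cert_options_test.go performs via openssl never gets a chance to run here, because the VM does not boot. An illustrative sketch of the same check in Go follows; it is not the test's implementation, and it assumes a hypothetical local copy of /var/lib/minikube/certs/apiserver.crt named apiserver.crt.

// san_check.go: illustrative sketch of the SAN verification behind
// TestCertOptions. The wanted values mirror the --apiserver-ips and
// --apiserver-names flags from the invocation above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"net"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetical local copy of the apiserver certificate
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	wantIPs := []string{"127.0.0.1", "192.168.15.15"}
	wantNames := []string{"localhost", "www.google.com"}
	for _, w := range wantIPs {
		found := false
		for _, ip := range cert.IPAddresses {
			if ip.Equal(net.ParseIP(w)) {
				found = true
				break
			}
		}
		fmt.Printf("IP SAN %s present: %v\n", w, found)
	}
	for _, w := range wantNames {
		found := false
		for _, name := range cert.DNSNames {
			if name == w {
				found = true
				break
			}
		}
		fmt.Printf("DNS SAN %s present: %v\n", w, found)
	}
}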

                                                
                                    
TestCertExpiration (197.87s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-161000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-161000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (12.564321s)

                                                
                                                
-- stdout --
	* [cert-expiration-161000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-161000" primary control-plane node in "cert-expiration-161000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-161000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-161000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-161000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-161000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-161000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.182415208s)

                                                
                                                
-- stdout --
	* [cert-expiration-161000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-161000" primary control-plane node in "cert-expiration-161000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-161000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-161000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-161000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-161000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-161000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-161000" primary control-plane node in "cert-expiration-161000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-161000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-161000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-161000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-10-01 16:43:50.98575 -0700 PDT m=+3423.877543917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-161000 -n cert-expiration-161000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-161000 -n cert-expiration-161000: exit status 7 (39.580208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-161000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-161000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-161000
--- FAIL: TestCertExpiration (197.87s)
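
TestCertExpiration expects the second start to warn that the certificates minted under --cert-expiration=3m have expired; with neither start ever booting a VM, no certificates exist and the warning cannot appear. The expiry condition itself is a NotAfter comparison. Below is a small illustrative sketch of that comparison (not minikube's implementation; apiserver.crt is a hypothetical local copy of the control-plane certificate):

// expiry_check.go: illustrative only. Reports whether a certificate has
// already expired, which is what the 3m expiration in the first start is
// meant to provoke before the second start in TestCertExpiration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetical local copy
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	remaining := time.Until(cert.NotAfter)
	if remaining <= 0 {
		fmt.Printf("certificate expired %s ago (NotAfter=%s)\n", -remaining, cert.NotAfter)
	} else {
		fmt.Printf("certificate valid for another %s (NotAfter=%s)\n", remaining, cert.NotAfter)
	}
}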

                                                
                                    
TestDockerFlags (12.81s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-434000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-434000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.314569625s)

                                                
                                                
-- stdout --
	* [docker-flags-434000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-434000" primary control-plane node in "docker-flags-434000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-434000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:40:23.394272    4693 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:40:23.394433    4693 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:40:23.394445    4693 out.go:358] Setting ErrFile to fd 2...
	I1001 16:40:23.394448    4693 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:40:23.394617    4693 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:40:23.396065    4693 out.go:352] Setting JSON to false
	I1001 16:40:23.415295    4693 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4191,"bootTime":1727821832,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:40:23.415365    4693 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:40:23.460088    4693 out.go:177] * [docker-flags-434000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:40:23.467911    4693 notify.go:220] Checking for updates...
	I1001 16:40:23.473853    4693 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:40:23.485778    4693 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:40:23.491764    4693 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:40:23.501813    4693 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:40:23.505922    4693 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:40:23.508844    4693 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:40:23.512230    4693 config.go:182] Loaded profile config "force-systemd-flag-173000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:40:23.512292    4693 config.go:182] Loaded profile config "multinode-603000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:40:23.512341    4693 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:40:23.516832    4693 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:40:23.523794    4693 start.go:297] selected driver: qemu2
	I1001 16:40:23.523800    4693 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:40:23.523807    4693 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:40:23.525998    4693 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 16:40:23.528849    4693 out.go:177] * Automatically selected the socket_vmnet network
	I1001 16:40:23.531882    4693 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1001 16:40:23.531904    4693 cni.go:84] Creating CNI manager for ""
	I1001 16:40:23.531935    4693 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:40:23.531939    4693 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 16:40:23.531968    4693 start.go:340] cluster config:
	{Name:docker-flags-434000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:40:23.535763    4693 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:40:23.542872    4693 out.go:177] * Starting "docker-flags-434000" primary control-plane node in "docker-flags-434000" cluster
	I1001 16:40:23.546717    4693 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:40:23.546739    4693 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:40:23.546747    4693 cache.go:56] Caching tarball of preloaded images
	I1001 16:40:23.546797    4693 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:40:23.546803    4693 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:40:23.546858    4693 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/docker-flags-434000/config.json ...
	I1001 16:40:23.546869    4693 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/docker-flags-434000/config.json: {Name:mkddb3b0d2105694ed46eca5b701770bdbfaf3f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:40:23.547247    4693 start.go:360] acquireMachinesLock for docker-flags-434000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:40:25.506394    4693 start.go:364] duration metric: took 1.959115917s to acquireMachinesLock for "docker-flags-434000"
	I1001 16:40:25.506496    4693 start.go:93] Provisioning new machine with config: &{Name:docker-flags-434000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:40:25.506704    4693 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:40:25.516208    4693 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1001 16:40:25.564913    4693 start.go:159] libmachine.API.Create for "docker-flags-434000" (driver="qemu2")
	I1001 16:40:25.564967    4693 client.go:168] LocalClient.Create starting
	I1001 16:40:25.565106    4693 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:40:25.565165    4693 main.go:141] libmachine: Decoding PEM data...
	I1001 16:40:25.565188    4693 main.go:141] libmachine: Parsing certificate...
	I1001 16:40:25.565256    4693 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:40:25.565303    4693 main.go:141] libmachine: Decoding PEM data...
	I1001 16:40:25.565318    4693 main.go:141] libmachine: Parsing certificate...
	I1001 16:40:25.566027    4693 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:40:25.743874    4693 main.go:141] libmachine: Creating SSH key...
	I1001 16:40:25.849386    4693 main.go:141] libmachine: Creating Disk image...
	I1001 16:40:25.849392    4693 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:40:25.849582    4693 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/docker-flags-434000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/docker-flags-434000/disk.qcow2
	I1001 16:40:25.858935    4693 main.go:141] libmachine: STDOUT: 
	I1001 16:40:25.858963    4693 main.go:141] libmachine: STDERR: 
	I1001 16:40:25.859026    4693 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/docker-flags-434000/disk.qcow2 +20000M
	I1001 16:40:25.866767    4693 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:40:25.866781    4693 main.go:141] libmachine: STDERR: 
	I1001 16:40:25.866801    4693 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/docker-flags-434000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/docker-flags-434000/disk.qcow2
	I1001 16:40:25.866806    4693 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:40:25.866819    4693 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:40:25.866846    4693 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/docker-flags-434000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/docker-flags-434000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/docker-flags-434000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:9c:ed:58:37:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/docker-flags-434000/disk.qcow2
	I1001 16:40:25.868464    4693 main.go:141] libmachine: STDOUT: 
	I1001 16:40:25.868481    4693 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:40:25.868502    4693 client.go:171] duration metric: took 303.531417ms to LocalClient.Create
	I1001 16:40:27.870660    4693 start.go:128] duration metric: took 2.363947291s to createHost
	I1001 16:40:27.870705    4693 start.go:83] releasing machines lock for "docker-flags-434000", held for 2.364303583s
	W1001 16:40:27.870763    4693 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:40:27.891045    4693 out.go:177] * Deleting "docker-flags-434000" in qemu2 ...
	W1001 16:40:27.926126    4693 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:40:27.926145    4693 start.go:729] Will try again in 5 seconds ...
	I1001 16:40:32.927610    4693 start.go:360] acquireMachinesLock for docker-flags-434000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:40:32.927954    4693 start.go:364] duration metric: took 267.167µs to acquireMachinesLock for "docker-flags-434000"
	I1001 16:40:32.928064    4693 start.go:93] Provisioning new machine with config: &{Name:docker-flags-434000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-434000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:40:32.928284    4693 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:40:32.933082    4693 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1001 16:40:32.973678    4693 start.go:159] libmachine.API.Create for "docker-flags-434000" (driver="qemu2")
	I1001 16:40:32.973739    4693 client.go:168] LocalClient.Create starting
	I1001 16:40:32.973831    4693 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:40:32.973895    4693 main.go:141] libmachine: Decoding PEM data...
	I1001 16:40:32.973911    4693 main.go:141] libmachine: Parsing certificate...
	I1001 16:40:32.973987    4693 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:40:32.974018    4693 main.go:141] libmachine: Decoding PEM data...
	I1001 16:40:32.974030    4693 main.go:141] libmachine: Parsing certificate...
	I1001 16:40:32.974485    4693 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:40:33.472542    4693 main.go:141] libmachine: Creating SSH key...
	I1001 16:40:33.605526    4693 main.go:141] libmachine: Creating Disk image...
	I1001 16:40:33.605532    4693 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:40:33.605740    4693 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/docker-flags-434000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/docker-flags-434000/disk.qcow2
	I1001 16:40:33.615547    4693 main.go:141] libmachine: STDOUT: 
	I1001 16:40:33.615568    4693 main.go:141] libmachine: STDERR: 
	I1001 16:40:33.615640    4693 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/docker-flags-434000/disk.qcow2 +20000M
	I1001 16:40:33.623510    4693 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:40:33.623529    4693 main.go:141] libmachine: STDERR: 
	I1001 16:40:33.623545    4693 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/docker-flags-434000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/docker-flags-434000/disk.qcow2
	I1001 16:40:33.623558    4693 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:40:33.623568    4693 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:40:33.623600    4693 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/docker-flags-434000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/docker-flags-434000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/docker-flags-434000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:c5:18:22:fe:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/docker-flags-434000/disk.qcow2
	I1001 16:40:33.625252    4693 main.go:141] libmachine: STDOUT: 
	I1001 16:40:33.625267    4693 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:40:33.625279    4693 client.go:171] duration metric: took 651.54175ms to LocalClient.Create
	I1001 16:40:35.627443    4693 start.go:128] duration metric: took 2.699158541s to createHost
	I1001 16:40:35.627502    4693 start.go:83] releasing machines lock for "docker-flags-434000", held for 2.699555417s
	W1001 16:40:35.627962    4693 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-434000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-434000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:40:35.643810    4693 out.go:201] 
	W1001 16:40:35.648887    4693 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:40:35.648926    4693 out.go:270] * 
	* 
	W1001 16:40:35.651025    4693 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:40:35.662777    4693 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-434000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-434000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-434000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (84.61475ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-434000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-434000"

                                                
                                                
-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-434000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-434000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-434000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-434000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-434000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-434000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-434000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (97.196625ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-434000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-434000"

                                                
                                                
-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-434000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-434000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-434000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-434000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-10-01 16:40:35.859199 -0700 PDT m=+3228.748990417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-434000 -n docker-flags-434000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-434000 -n docker-flags-434000: exit status 7 (34.296792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-434000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-434000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-434000
--- FAIL: TestDockerFlags (12.81s)
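
The checks in docker_test.go amount to substring assertions on systemctl's output for the docker unit, and they fail here only because there is no running host to query. The sketch below is illustrative (it is not the test code, and the sample systemctl output is hypothetical); it mirrors how the --docker-env and --docker-opt values from the invocation above would be verified once a VM boots.

// dockerflags_check.go: illustrative sketch mirroring the assertions in
// TestDockerFlags. The two sample strings stand in for real systemctl output
// and are hypothetical; the wanted substrings come from the --docker-env and
// --docker-opt flags in the invocation above.
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Stand-in for: systemctl show docker --property=Environment --no-pager
	environment := "Environment=FOO=BAR BAZ=BAT"
	// Stand-in for: systemctl show docker --property=ExecStart --no-pager
	execStart := "ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd --debug --icc=true }"

	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		fmt.Printf("docker-env %q propagated: %v\n", want, strings.Contains(environment, want))
	}
	for _, want := range []string{"--debug", "--icc=true"} {
		fmt.Printf("docker-opt %q propagated: %v\n", want, strings.Contains(execStart, want))
	}
}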

                                                
                                    
TestForceSystemdFlag (12.02s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-173000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-173000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.70385325s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-173000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-173000" primary control-plane node in "force-systemd-flag-173000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-173000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 16:40:21.244171    4679 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:40:21.244293    4679 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:40:21.244296    4679 out.go:358] Setting ErrFile to fd 2...
	I1001 16:40:21.244298    4679 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:40:21.244418    4679 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:40:21.245491    4679 out.go:352] Setting JSON to false
	I1001 16:40:21.261562    4679 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4189,"bootTime":1727821832,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:40:21.261631    4679 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:40:21.284892    4679 out.go:177] * [force-systemd-flag-173000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:40:21.295249    4679 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:40:21.295281    4679 notify.go:220] Checking for updates...
	I1001 16:40:21.308170    4679 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:40:21.312218    4679 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:40:21.315262    4679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:40:21.318280    4679 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:40:21.321234    4679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:40:21.324667    4679 config.go:182] Loaded profile config "force-systemd-env-845000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:40:21.324744    4679 config.go:182] Loaded profile config "multinode-603000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:40:21.324795    4679 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:40:21.329175    4679 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:40:21.336271    4679 start.go:297] selected driver: qemu2
	I1001 16:40:21.336278    4679 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:40:21.336286    4679 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:40:21.339001    4679 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 16:40:21.342192    4679 out.go:177] * Automatically selected the socket_vmnet network
	I1001 16:40:21.345365    4679 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 16:40:21.345381    4679 cni.go:84] Creating CNI manager for ""
	I1001 16:40:21.345419    4679 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:40:21.345426    4679 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 16:40:21.345461    4679 start.go:340] cluster config:
	{Name:force-systemd-flag-173000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-173000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:40:21.349846    4679 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:40:21.355186    4679 out.go:177] * Starting "force-systemd-flag-173000" primary control-plane node in "force-systemd-flag-173000" cluster
	I1001 16:40:21.359250    4679 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:40:21.359268    4679 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:40:21.359278    4679 cache.go:56] Caching tarball of preloaded images
	I1001 16:40:21.359370    4679 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:40:21.359377    4679 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:40:21.359469    4679 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/force-systemd-flag-173000/config.json ...
	I1001 16:40:21.359483    4679 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/force-systemd-flag-173000/config.json: {Name:mk6ab7e0ef1d6780a2b037b5790a82aa67f31422 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:40:21.359769    4679 start.go:360] acquireMachinesLock for force-systemd-flag-173000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:40:22.995269    4679 start.go:364] duration metric: took 1.635434959s to acquireMachinesLock for "force-systemd-flag-173000"
	I1001 16:40:22.995455    4679 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-173000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-173000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:40:22.995679    4679 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:40:23.003824    4679 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1001 16:40:23.053766    4679 start.go:159] libmachine.API.Create for "force-systemd-flag-173000" (driver="qemu2")
	I1001 16:40:23.053815    4679 client.go:168] LocalClient.Create starting
	I1001 16:40:23.053936    4679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:40:23.054005    4679 main.go:141] libmachine: Decoding PEM data...
	I1001 16:40:23.054021    4679 main.go:141] libmachine: Parsing certificate...
	I1001 16:40:23.054081    4679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:40:23.054133    4679 main.go:141] libmachine: Decoding PEM data...
	I1001 16:40:23.054151    4679 main.go:141] libmachine: Parsing certificate...
	I1001 16:40:23.054802    4679 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:40:23.406020    4679 main.go:141] libmachine: Creating SSH key...
	I1001 16:40:23.464905    4679 main.go:141] libmachine: Creating Disk image...
	I1001 16:40:23.464912    4679 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:40:23.465090    4679 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-flag-173000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-flag-173000/disk.qcow2
	I1001 16:40:23.486809    4679 main.go:141] libmachine: STDOUT: 
	I1001 16:40:23.486830    4679 main.go:141] libmachine: STDERR: 
	I1001 16:40:23.486887    4679 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-flag-173000/disk.qcow2 +20000M
	I1001 16:40:23.502213    4679 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:40:23.502231    4679 main.go:141] libmachine: STDERR: 
	I1001 16:40:23.502252    4679 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-flag-173000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-flag-173000/disk.qcow2
	I1001 16:40:23.502258    4679 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:40:23.502269    4679 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:40:23.502294    4679 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-flag-173000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-flag-173000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-flag-173000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:40:8e:48:6f:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-flag-173000/disk.qcow2
	I1001 16:40:23.503901    4679 main.go:141] libmachine: STDOUT: 
	I1001 16:40:23.503916    4679 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:40:23.503936    4679 client.go:171] duration metric: took 450.119375ms to LocalClient.Create
	I1001 16:40:25.506111    4679 start.go:128] duration metric: took 2.510422417s to createHost
	I1001 16:40:25.506189    4679 start.go:83] releasing machines lock for "force-systemd-flag-173000", held for 2.510891833s
	W1001 16:40:25.506247    4679 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:40:25.523225    4679 out.go:177] * Deleting "force-systemd-flag-173000" in qemu2 ...
	W1001 16:40:25.548633    4679 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:40:25.548657    4679 start.go:729] Will try again in 5 seconds ...
	I1001 16:40:30.550833    4679 start.go:360] acquireMachinesLock for force-systemd-flag-173000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:40:30.551355    4679 start.go:364] duration metric: took 416.583µs to acquireMachinesLock for "force-systemd-flag-173000"
	I1001 16:40:30.551471    4679 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-173000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-173000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:40:30.551734    4679 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:40:30.569487    4679 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1001 16:40:30.618541    4679 start.go:159] libmachine.API.Create for "force-systemd-flag-173000" (driver="qemu2")
	I1001 16:40:30.618598    4679 client.go:168] LocalClient.Create starting
	I1001 16:40:30.618718    4679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:40:30.618790    4679 main.go:141] libmachine: Decoding PEM data...
	I1001 16:40:30.618805    4679 main.go:141] libmachine: Parsing certificate...
	I1001 16:40:30.618869    4679 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:40:30.618916    4679 main.go:141] libmachine: Decoding PEM data...
	I1001 16:40:30.618930    4679 main.go:141] libmachine: Parsing certificate...
	I1001 16:40:30.619490    4679 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:40:30.790414    4679 main.go:141] libmachine: Creating SSH key...
	I1001 16:40:30.853850    4679 main.go:141] libmachine: Creating Disk image...
	I1001 16:40:30.853855    4679 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:40:30.854090    4679 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-flag-173000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-flag-173000/disk.qcow2
	I1001 16:40:30.863143    4679 main.go:141] libmachine: STDOUT: 
	I1001 16:40:30.863160    4679 main.go:141] libmachine: STDERR: 
	I1001 16:40:30.863218    4679 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-flag-173000/disk.qcow2 +20000M
	I1001 16:40:30.871152    4679 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:40:30.871163    4679 main.go:141] libmachine: STDERR: 
	I1001 16:40:30.871174    4679 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-flag-173000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-flag-173000/disk.qcow2
	I1001 16:40:30.871179    4679 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:40:30.871205    4679 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:40:30.871229    4679 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-flag-173000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-flag-173000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-flag-173000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:75:fc:f8:31:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-flag-173000/disk.qcow2
	I1001 16:40:30.872814    4679 main.go:141] libmachine: STDOUT: 
	I1001 16:40:30.872828    4679 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:40:30.872840    4679 client.go:171] duration metric: took 254.237167ms to LocalClient.Create
	I1001 16:40:32.875085    4679 start.go:128] duration metric: took 2.323315417s to createHost
	I1001 16:40:32.875163    4679 start.go:83] releasing machines lock for "force-systemd-flag-173000", held for 2.323807291s
	W1001 16:40:32.875507    4679 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-173000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-173000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:40:32.885044    4679 out.go:201] 
	W1001 16:40:32.893234    4679 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:40:32.893275    4679 out.go:270] * 
	* 
	W1001 16:40:32.895756    4679 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:40:32.906062    4679 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-173000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-173000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-173000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (103.257583ms)

-- stdout --
	* The control-plane node force-systemd-flag-173000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-173000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-173000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-10-01 16:40:33.026568 -0700 PDT m=+3225.916330667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-173000 -n force-systemd-flag-173000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-173000 -n force-systemd-flag-173000: exit status 7 (43.064584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-173000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-173000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-173000
--- FAIL: TestForceSystemdFlag (12.02s)
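Both VM create attempts above fail the same way: the qemu2 driver launches the guest through socket_vmnet_client, the connection to /var/run/socket_vmnet is refused, so no host ever boots and every later ssh/status step only sees state=Stopped. A quick sanity check on the build host, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe, might look like:

	# is the helper process running, and does the socket it serves exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# if not, restart it with root privileges (Homebrew install assumed; adjust for other setups)
	sudo "$(which brew)" services restart socket_vmnet

	# then retry the failing start
	out/minikube-darwin-arm64 start -p force-systemd-flag-173000 --memory=2048 --force-systemd --driver=qemu2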

TestForceSystemdEnv (10.48s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-845000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I1001 16:40:15.961774    1659 install.go:79] stdout: 
W1001 16:40:15.961963    1659 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1732578170/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1732578170/001/docker-machine-driver-hyperkit 

I1001 16:40:15.961997    1659 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1732578170/001/docker-machine-driver-hyperkit]
I1001 16:40:15.976405    1659 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1732578170/001/docker-machine-driver-hyperkit]
I1001 16:40:15.986537    1659 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1732578170/001/docker-machine-driver-hyperkit]
I1001 16:40:15.995673    1659 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1732578170/001/docker-machine-driver-hyperkit]
I1001 16:40:16.011438    1659 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1001 16:40:16.011544    1659 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I1001 16:40:17.794877    1659 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1001 16:40:17.794898    1659 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1001 16:40:17.794948    1659 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1001 16:40:17.794978    1659 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1732578170/002/docker-machine-driver-hyperkit
I1001 16:40:18.184842    1659 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1732578170/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1048f2d40 0x1048f2d40 0x1048f2d40 0x1048f2d40 0x1048f2d40 0x1048f2d40 0x1048f2d40] Decompressors:map[bz2:0x14000482db0 gz:0x14000482db8 tar:0x14000482d60 tar.bz2:0x14000482d70 tar.gz:0x14000482d80 tar.xz:0x14000482d90 tar.zst:0x14000482da0 tbz2:0x14000482d70 tgz:0x14000482d80 txz:0x14000482d90 tzst:0x14000482da0 xz:0x14000482dc0 zip:0x14000482dd0 zst:0x14000482dc8] Getters:map[file:0x1400161aa90 http:0x140005b4500 https:0x140005b47d0] Dir:false ProgressListener:<nil> Insecure:false DisableSy
mlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1001 16:40:18.184985    1659 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1732578170/002/docker-machine-driver-hyperkit
I1001 16:40:21.178932    1659 install.go:79] stdout: 
W1001 16:40:21.179084    1659 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1732578170/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1732578170/002/docker-machine-driver-hyperkit 

I1001 16:40:21.179104    1659 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1732578170/002/docker-machine-driver-hyperkit]
I1001 16:40:21.189535    1659 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1732578170/002/docker-machine-driver-hyperkit]
I1001 16:40:21.198760    1659 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1732578170/002/docker-machine-driver-hyperkit]
I1001 16:40:21.206989    1659 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1732578170/002/docker-machine-driver-hyperkit]
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-845000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.153675916s)

-- stdout --
	* [force-systemd-env-845000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-845000" primary control-plane node in "force-systemd-env-845000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-845000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 16:40:12.907247    4640 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:40:12.907433    4640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:40:12.907436    4640 out.go:358] Setting ErrFile to fd 2...
	I1001 16:40:12.907439    4640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:40:12.907577    4640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:40:12.909140    4640 out.go:352] Setting JSON to false
	I1001 16:40:12.926702    4640 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4180,"bootTime":1727821832,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:40:12.926808    4640 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:40:12.932103    4640 out.go:177] * [force-systemd-env-845000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:40:12.941068    4640 notify.go:220] Checking for updates...
	I1001 16:40:12.945099    4640 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:40:12.953008    4640 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:40:12.962020    4640 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:40:12.969976    4640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:40:12.977993    4640 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:40:12.985863    4640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1001 16:40:12.990485    4640 config.go:182] Loaded profile config "multinode-603000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:40:12.990542    4640 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:40:12.995019    4640 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:40:13.002009    4640 start.go:297] selected driver: qemu2
	I1001 16:40:13.002019    4640 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:40:13.002027    4640 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:40:13.004776    4640 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 16:40:13.009064    4640 out.go:177] * Automatically selected the socket_vmnet network
	I1001 16:40:13.012145    4640 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 16:40:13.012164    4640 cni.go:84] Creating CNI manager for ""
	I1001 16:40:13.012190    4640 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:40:13.012196    4640 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 16:40:13.012227    4640 start.go:340] cluster config:
	{Name:force-systemd-env-845000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-845000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticI
P: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:40:13.016920    4640 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:40:13.024057    4640 out.go:177] * Starting "force-systemd-env-845000" primary control-plane node in "force-systemd-env-845000" cluster
	I1001 16:40:13.028024    4640 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:40:13.028048    4640 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:40:13.028058    4640 cache.go:56] Caching tarball of preloaded images
	I1001 16:40:13.028153    4640 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:40:13.028160    4640 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:40:13.028230    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/force-systemd-env-845000/config.json ...
	I1001 16:40:13.028243    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/force-systemd-env-845000/config.json: {Name:mkbd9d1386aa4468e5727c542558a47f1d07486e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:40:13.028516    4640 start.go:360] acquireMachinesLock for force-systemd-env-845000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:40:13.028559    4640 start.go:364] duration metric: took 33.708µs to acquireMachinesLock for "force-systemd-env-845000"
	I1001 16:40:13.028574    4640 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-845000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-845000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:40:13.028613    4640 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:40:13.033116    4640 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1001 16:40:13.053744    4640 start.go:159] libmachine.API.Create for "force-systemd-env-845000" (driver="qemu2")
	I1001 16:40:13.053773    4640 client.go:168] LocalClient.Create starting
	I1001 16:40:13.053860    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:40:13.053896    4640 main.go:141] libmachine: Decoding PEM data...
	I1001 16:40:13.053907    4640 main.go:141] libmachine: Parsing certificate...
	I1001 16:40:13.053961    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:40:13.053991    4640 main.go:141] libmachine: Decoding PEM data...
	I1001 16:40:13.054008    4640 main.go:141] libmachine: Parsing certificate...
	I1001 16:40:13.054444    4640 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:40:13.212776    4640 main.go:141] libmachine: Creating SSH key...
	I1001 16:40:13.510483    4640 main.go:141] libmachine: Creating Disk image...
	I1001 16:40:13.510495    4640 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:40:13.510793    4640 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-env-845000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-env-845000/disk.qcow2
	I1001 16:40:13.520641    4640 main.go:141] libmachine: STDOUT: 
	I1001 16:40:13.520659    4640 main.go:141] libmachine: STDERR: 
	I1001 16:40:13.520719    4640 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-env-845000/disk.qcow2 +20000M
	I1001 16:40:13.528862    4640 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:40:13.528885    4640 main.go:141] libmachine: STDERR: 
	I1001 16:40:13.528901    4640 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-env-845000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-env-845000/disk.qcow2
	I1001 16:40:13.528907    4640 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:40:13.528917    4640 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:40:13.528951    4640 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-env-845000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-env-845000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-env-845000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:2a:6a:97:5f:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-env-845000/disk.qcow2
	I1001 16:40:13.530562    4640 main.go:141] libmachine: STDOUT: 
	I1001 16:40:13.530575    4640 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:40:13.530596    4640 client.go:171] duration metric: took 476.820125ms to LocalClient.Create
	I1001 16:40:15.531339    4640 start.go:128] duration metric: took 2.502708625s to createHost
	I1001 16:40:15.531415    4640 start.go:83] releasing machines lock for "force-systemd-env-845000", held for 2.502861583s
	W1001 16:40:15.531540    4640 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:40:15.542497    4640 out.go:177] * Deleting "force-systemd-env-845000" in qemu2 ...
	W1001 16:40:15.575191    4640 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:40:15.575218    4640 start.go:729] Will try again in 5 seconds ...
	I1001 16:40:20.575918    4640 start.go:360] acquireMachinesLock for force-systemd-env-845000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:40:20.576335    4640 start.go:364] duration metric: took 352µs to acquireMachinesLock for "force-systemd-env-845000"
	I1001 16:40:20.576428    4640 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-845000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-845000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:40:20.576622    4640 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:40:20.588501    4640 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1001 16:40:20.639962    4640 start.go:159] libmachine.API.Create for "force-systemd-env-845000" (driver="qemu2")
	I1001 16:40:20.640137    4640 client.go:168] LocalClient.Create starting
	I1001 16:40:20.640264    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:40:20.640332    4640 main.go:141] libmachine: Decoding PEM data...
	I1001 16:40:20.640351    4640 main.go:141] libmachine: Parsing certificate...
	I1001 16:40:20.640425    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:40:20.640470    4640 main.go:141] libmachine: Decoding PEM data...
	I1001 16:40:20.640486    4640 main.go:141] libmachine: Parsing certificate...
	I1001 16:40:20.641023    4640 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:40:20.811568    4640 main.go:141] libmachine: Creating SSH key...
	I1001 16:40:20.972848    4640 main.go:141] libmachine: Creating Disk image...
	I1001 16:40:20.972858    4640 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:40:20.973113    4640 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-env-845000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-env-845000/disk.qcow2
	I1001 16:40:20.982663    4640 main.go:141] libmachine: STDOUT: 
	I1001 16:40:20.982685    4640 main.go:141] libmachine: STDERR: 
	I1001 16:40:20.982768    4640 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-env-845000/disk.qcow2 +20000M
	I1001 16:40:20.990950    4640 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:40:20.990966    4640 main.go:141] libmachine: STDERR: 
	I1001 16:40:20.990983    4640 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-env-845000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-env-845000/disk.qcow2
	I1001 16:40:20.990993    4640 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:40:20.991002    4640 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:40:20.991037    4640 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-env-845000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-env-845000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-env-845000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:c6:6b:e7:e1:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/force-systemd-env-845000/disk.qcow2
	I1001 16:40:20.992688    4640 main.go:141] libmachine: STDOUT: 
	I1001 16:40:20.992702    4640 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:40:20.992718    4640 client.go:171] duration metric: took 352.575541ms to LocalClient.Create
	I1001 16:40:22.995011    4640 start.go:128] duration metric: took 2.418326167s to createHost
	I1001 16:40:22.995110    4640 start.go:83] releasing machines lock for "force-systemd-env-845000", held for 2.41877625s
	W1001 16:40:22.995440    4640 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-845000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-845000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:40:23.007862    4640 out.go:201] 
	W1001 16:40:23.010853    4640 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:40:23.010894    4640 out.go:270] * 
	* 
	W1001 16:40:23.013793    4640 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:40:23.022786    4640 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-845000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-845000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-845000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (92.406125ms)

-- stdout --
	* The control-plane node force-systemd-env-845000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-845000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-845000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-10-01 16:40:23.12521 -0700 PDT m=+3216.014870709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-845000 -n force-systemd-env-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-845000 -n force-systemd-env-845000: exit status 7 (38.284416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-845000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-845000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-845000
--- FAIL: TestForceSystemdEnv (10.48s)
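
The root cause above is the qemu2 driver's dependency on socket_vmnet: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client gets "Connection refused" on /var/run/socket_vmnet, so the VM never starts. Below is a minimal Go sketch, not part of the test suite, that only reuses the socket path shown in the log; it probes the same socket to confirm whether the socket_vmnet daemon is accepting connections on the build host (a permission error rather than "connection refused" is also possible when run unprivileged).

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// Probe the socket_vmnet control socket that the qemu2 driver connects to.
// A dial error such as "connection refused" reproduces the failure above and
// usually means the socket_vmnet daemon is not running, or not listening at
// this path, on the build host.
func main() {
	const sock = "/var/run/socket_vmnet" // path reported in the failure above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening at", sock)
}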

TestFunctional/parallel/ServiceCmdConnect (30.05s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-808000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-808000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-jlzmv" [f321200c-9947-4afd-8d2d-fc23f3348b94] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-jlzmv" [f321200c-9947-4afd-8d2d-fc23f3348b94] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004123084s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31434
functional_test.go:1661: error fetching http://192.168.105.4:31434: Get "http://192.168.105.4:31434": dial tcp 192.168.105.4:31434: connect: connection refused
I1001 16:07:38.635984    1659 retry.go:31] will retry after 901.783105ms: Get "http://192.168.105.4:31434": dial tcp 192.168.105.4:31434: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31434: Get "http://192.168.105.4:31434": dial tcp 192.168.105.4:31434: connect: connection refused
I1001 16:07:39.540743    1659 retry.go:31] will retry after 1.373657661s: Get "http://192.168.105.4:31434": dial tcp 192.168.105.4:31434: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31434: Get "http://192.168.105.4:31434": dial tcp 192.168.105.4:31434: connect: connection refused
I1001 16:07:40.918178    1659 retry.go:31] will retry after 1.305229982s: Get "http://192.168.105.4:31434": dial tcp 192.168.105.4:31434: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31434: Get "http://192.168.105.4:31434": dial tcp 192.168.105.4:31434: connect: connection refused
I1001 16:07:42.227324    1659 retry.go:31] will retry after 2.806742167s: Get "http://192.168.105.4:31434": dial tcp 192.168.105.4:31434: connect: connection refused
E1001 16:07:43.595335    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:31434: Get "http://192.168.105.4:31434": dial tcp 192.168.105.4:31434: connect: connection refused
I1001 16:07:45.037033    1659 retry.go:31] will retry after 7.113236196s: Get "http://192.168.105.4:31434": dial tcp 192.168.105.4:31434: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31434: Get "http://192.168.105.4:31434": dial tcp 192.168.105.4:31434: connect: connection refused
I1001 16:07:52.154073    1659 retry.go:31] will retry after 8.28947363s: Get "http://192.168.105.4:31434": dial tcp 192.168.105.4:31434: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31434: Get "http://192.168.105.4:31434": dial tcp 192.168.105.4:31434: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31434: Get "http://192.168.105.4:31434": dial tcp 192.168.105.4:31434: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-808000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-jlzmv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-808000/192.168.105.4
Start Time:       Tue, 01 Oct 2024 16:07:31 -0700
Labels:           app=hello-node-connect
pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
echoserver-arm:
Container ID:   docker://5753b91b983be9037e5a41ff908df27abe2e2ce8d1004587b1af60e16f87d161
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       CrashLoopBackOff
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Tue, 01 Oct 2024 16:07:44 -0700
Finished:     Tue, 01 Oct 2024 16:07:44 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dlqtm (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-dlqtm:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  29s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-jlzmv to functional-808000
Normal   Pulled     16s (x3 over 29s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    16s (x3 over 29s)  kubelet            Created container echoserver-arm
Normal   Started    16s (x3 over 29s)  kubelet            Started container echoserver-arm
Warning  BackOff    4s (x3 over 27s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-jlzmv_default(f321200c-9947-4afd-8d2d-fc23f3348b94)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-808000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
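
"exec format error" means the kernel could not run the container's entrypoint binary, which suggests the binary inside registry.k8s.io/echoserver-arm:1.8 is built for a different CPU architecture than the arm64 node, so every restart fails immediately and the pod crash-loops. A small Go sketch, not part of the test suite, that compares an image's recorded platform with the local host via the docker CLI (running it inside the VM with `minikube ssh` would be the more faithful comparison):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"runtime"
	"strings"
)

// Print the os/architecture recorded in a locally available image and compare
// it with this host. An image whose architecture does not match the node's CPU
// produces exactly the "exec format error" seen in the pod logs above.
func main() {
	image := "registry.k8s.io/echoserver-arm:1.8" // image named in the deployment above
	if len(os.Args) > 1 {
		image = os.Args[1]
	}
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Os}}/{{.Architecture}}", image).Output()
	if err != nil {
		fmt.Fprintf(os.Stderr, "docker image inspect failed: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("image %s reports %s; this host is %s/%s\n",
		image, strings.TrimSpace(string(out)), runtime.GOOS, runtime.GOARCH)
}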
functional_test.go:1614: (dbg) Run:  kubectl --context functional-808000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.58.126
IPs:                      10.111.58.126
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31434/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
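
Note the empty Endpoints field: the Service selects app=hello-node-connect, but its only pod never becomes Ready because it is crash-looping, so kube-proxy has nothing to forward NodePort 31434 to and every fetch above is refused. The retry.go lines earlier show the test's backoff loop; below is a minimal Go sketch of the same idea, not part of the test suite, with the endpoint URL taken from the log.

package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

// Repeatedly fetch a URL with a growing delay between attempts, giving up
// after a fixed deadline -- the same shape as the retried GETs logged by
// functional_test.go and retry.go above.
func main() {
	url := "http://192.168.105.4:31434" // NodePort URL reported by `service --url` above
	if len(os.Args) > 1 {
		url = os.Args[1]
	}
	delay := 500 * time.Millisecond
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			fmt.Printf("got HTTP %d from %s\n", resp.StatusCode, url)
			return
		}
		fmt.Fprintf(os.Stderr, "fetch failed: %v; retrying in %s\n", err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	fmt.Fprintln(os.Stderr, "gave up: no answer before the deadline")
	os.Exit(1)
}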
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-808000 -n functional-808000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                      Args                                                       |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image   | functional-808000 image ls                                                                                      | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT | 01 Oct 24 16:07 PDT |
	| image   | functional-808000 image save                                                                                    | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT | 01 Oct 24 16:07 PDT |
	|         | kicbase/echo-server:functional-808000                                                                           |                   |         |         |                     |                     |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                                   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-808000 image rm                                                                                      | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT | 01 Oct 24 16:07 PDT |
	|         | kicbase/echo-server:functional-808000                                                                           |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-808000 image ls                                                                                      | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT | 01 Oct 24 16:07 PDT |
	| image   | functional-808000 image load                                                                                    | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT | 01 Oct 24 16:07 PDT |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                                   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-808000 image ls                                                                                      | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT | 01 Oct 24 16:07 PDT |
	| image   | functional-808000 image save --daemon                                                                           | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT | 01 Oct 24 16:07 PDT |
	|         | kicbase/echo-server:functional-808000                                                                           |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-808000 ssh echo                                                                                      | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT | 01 Oct 24 16:07 PDT |
	|         | hello                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-808000 ssh cat                                                                                       | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT | 01 Oct 24 16:07 PDT |
	|         | /etc/hostname                                                                                                   |                   |         |         |                     |                     |
	| tunnel  | functional-808000 tunnel                                                                                        | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| tunnel  | functional-808000 tunnel                                                                                        | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| tunnel  | functional-808000 tunnel                                                                                        | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| service | functional-808000 service list                                                                                  | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT | 01 Oct 24 16:07 PDT |
	| service | functional-808000 service list                                                                                  | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT | 01 Oct 24 16:07 PDT |
	|         | -o json                                                                                                         |                   |         |         |                     |                     |
	| service | functional-808000 service                                                                                       | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT | 01 Oct 24 16:07 PDT |
	|         | --namespace=default --https                                                                                     |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                |                   |         |         |                     |                     |
	| service | functional-808000                                                                                               | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT | 01 Oct 24 16:07 PDT |
	|         | service hello-node --url                                                                                        |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                |                   |         |         |                     |                     |
	| service | functional-808000 service                                                                                       | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT | 01 Oct 24 16:07 PDT |
	|         | hello-node --url                                                                                                |                   |         |         |                     |                     |
	| addons  | functional-808000 addons list                                                                                   | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT | 01 Oct 24 16:07 PDT |
	| addons  | functional-808000 addons list                                                                                   | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT | 01 Oct 24 16:07 PDT |
	|         | -o json                                                                                                         |                   |         |         |                     |                     |
	| service | functional-808000 service                                                                                       | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT | 01 Oct 24 16:07 PDT |
	|         | hello-node-connect --url                                                                                        |                   |         |         |                     |                     |
	| mount   | -p functional-808000                                                                                            | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2475508348/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-808000 ssh findmnt                                                                                   | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-808000 ssh findmnt                                                                                   | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT | 01 Oct 24 16:07 PDT |
	|         | -T /mount-9p | grep 9p                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-808000 ssh -- ls                                                                                     | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT | 01 Oct 24 16:07 PDT |
	|         | -la /mount-9p                                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-808000 ssh cat                                                                                       | functional-808000 | jenkins | v1.34.0 | 01 Oct 24 16:07 PDT | 01 Oct 24 16:07 PDT |
	|         | /mount-9p/test-1727824074419583000                                                                              |                   |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 16:06:28
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 16:06:28.392919    2590 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:06:28.393067    2590 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:06:28.393069    2590 out.go:358] Setting ErrFile to fd 2...
	I1001 16:06:28.393071    2590 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:06:28.393202    2590 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:06:28.394170    2590 out.go:352] Setting JSON to false
	I1001 16:06:28.410506    2590 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2156,"bootTime":1727821832,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:06:28.410578    2590 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:06:28.415530    2590 out.go:177] * [functional-808000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:06:28.425582    2590 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:06:28.425608    2590 notify.go:220] Checking for updates...
	I1001 16:06:28.433474    2590 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:06:28.436559    2590 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:06:28.439495    2590 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:06:28.442607    2590 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:06:28.445527    2590 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:06:28.448781    2590 config.go:182] Loaded profile config "functional-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:06:28.448834    2590 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:06:28.453531    2590 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 16:06:28.460496    2590 start.go:297] selected driver: qemu2
	I1001 16:06:28.460500    2590 start.go:901] validating driver "qemu2" against &{Name:functional-808000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:functional-808000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:06:28.460564    2590 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:06:28.462718    2590 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:06:28.462737    2590 cni.go:84] Creating CNI manager for ""
	I1001 16:06:28.462765    2590 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:06:28.462802    2590 start.go:340] cluster config:
	{Name:functional-808000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-808000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:06:28.466146    2590 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:06:28.474424    2590 out.go:177] * Starting "functional-808000" primary control-plane node in "functional-808000" cluster
	I1001 16:06:28.478458    2590 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:06:28.478481    2590 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:06:28.478489    2590 cache.go:56] Caching tarball of preloaded images
	I1001 16:06:28.478574    2590 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:06:28.478579    2590 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:06:28.478644    2590 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/config.json ...
	I1001 16:06:28.478965    2590 start.go:360] acquireMachinesLock for functional-808000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:06:28.478998    2590 start.go:364] duration metric: took 27µs to acquireMachinesLock for "functional-808000"
	I1001 16:06:28.479005    2590 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:06:28.479007    2590 fix.go:54] fixHost starting: 
	I1001 16:06:28.479606    2590 fix.go:112] recreateIfNeeded on functional-808000: state=Running err=<nil>
	W1001 16:06:28.479613    2590 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:06:28.483498    2590 out.go:177] * Updating the running qemu2 "functional-808000" VM ...
	I1001 16:06:28.491461    2590 machine.go:93] provisionDockerMachine start ...
	I1001 16:06:28.491504    2590 main.go:141] libmachine: Using SSH client type: native
	I1001 16:06:28.491619    2590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e05c00] 0x104e08440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1001 16:06:28.491622    2590 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 16:06:28.538139    2590 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-808000
	
	I1001 16:06:28.538150    2590 buildroot.go:166] provisioning hostname "functional-808000"
	I1001 16:06:28.538192    2590 main.go:141] libmachine: Using SSH client type: native
	I1001 16:06:28.538309    2590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e05c00] 0x104e08440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1001 16:06:28.538313    2590 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-808000 && echo "functional-808000" | sudo tee /etc/hostname
	I1001 16:06:28.589849    2590 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-808000
	
	I1001 16:06:28.589898    2590 main.go:141] libmachine: Using SSH client type: native
	I1001 16:06:28.590017    2590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e05c00] 0x104e08440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1001 16:06:28.590023    2590 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-808000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-808000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-808000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 16:06:28.634263    2590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 16:06:28.634270    2590 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19740-1141/.minikube CaCertPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19740-1141/.minikube}
	I1001 16:06:28.634277    2590 buildroot.go:174] setting up certificates
	I1001 16:06:28.634281    2590 provision.go:84] configureAuth start
	I1001 16:06:28.634287    2590 provision.go:143] copyHostCerts
	I1001 16:06:28.634368    2590 exec_runner.go:144] found /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.pem, removing ...
	I1001 16:06:28.634372    2590 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.pem
	I1001 16:06:28.634629    2590 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.pem (1078 bytes)
	I1001 16:06:28.634799    2590 exec_runner.go:144] found /Users/jenkins/minikube-integration/19740-1141/.minikube/cert.pem, removing ...
	I1001 16:06:28.634800    2590 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19740-1141/.minikube/cert.pem
	I1001 16:06:28.634853    2590 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19740-1141/.minikube/cert.pem (1123 bytes)
	I1001 16:06:28.634951    2590 exec_runner.go:144] found /Users/jenkins/minikube-integration/19740-1141/.minikube/key.pem, removing ...
	I1001 16:06:28.634952    2590 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19740-1141/.minikube/key.pem
	I1001 16:06:28.634997    2590 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19740-1141/.minikube/key.pem (1679 bytes)
	I1001 16:06:28.635089    2590 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca-key.pem org=jenkins.functional-808000 san=[127.0.0.1 192.168.105.4 functional-808000 localhost minikube]
	I1001 16:06:28.687472    2590 provision.go:177] copyRemoteCerts
	I1001 16:06:28.687508    2590 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 16:06:28.687515    2590 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/functional-808000/id_rsa Username:docker}
	I1001 16:06:28.711816    2590 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 16:06:28.721142    2590 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1001 16:06:28.729800    2590 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 16:06:28.737960    2590 provision.go:87] duration metric: took 103.672333ms to configureAuth
	I1001 16:06:28.737966    2590 buildroot.go:189] setting minikube options for container-runtime
	I1001 16:06:28.738076    2590 config.go:182] Loaded profile config "functional-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:06:28.738119    2590 main.go:141] libmachine: Using SSH client type: native
	I1001 16:06:28.738205    2590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e05c00] 0x104e08440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1001 16:06:28.738214    2590 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1001 16:06:28.782413    2590 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1001 16:06:28.782418    2590 buildroot.go:70] root file system type: tmpfs
	I1001 16:06:28.782469    2590 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1001 16:06:28.782523    2590 main.go:141] libmachine: Using SSH client type: native
	I1001 16:06:28.782628    2590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e05c00] 0x104e08440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1001 16:06:28.782660    2590 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1001 16:06:28.833399    2590 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1001 16:06:28.833484    2590 main.go:141] libmachine: Using SSH client type: native
	I1001 16:06:28.833604    2590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e05c00] 0x104e08440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1001 16:06:28.833614    2590 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1001 16:06:28.880582    2590 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 16:06:28.880589    2590 machine.go:96] duration metric: took 389.128375ms to provisionDockerMachine
	I1001 16:06:28.880593    2590 start.go:293] postStartSetup for "functional-808000" (driver="qemu2")
	I1001 16:06:28.880598    2590 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 16:06:28.880659    2590 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 16:06:28.880666    2590 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/functional-808000/id_rsa Username:docker}
	I1001 16:06:28.906565    2590 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 16:06:28.908092    2590 info.go:137] Remote host: Buildroot 2023.02.9
	I1001 16:06:28.908096    2590 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19740-1141/.minikube/addons for local assets ...
	I1001 16:06:28.908154    2590 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19740-1141/.minikube/files for local assets ...
	I1001 16:06:28.908275    2590 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19740-1141/.minikube/files/etc/ssl/certs/16592.pem -> 16592.pem in /etc/ssl/certs
	I1001 16:06:28.908399    2590 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19740-1141/.minikube/files/etc/test/nested/copy/1659/hosts -> hosts in /etc/test/nested/copy/1659
	I1001 16:06:28.908434    2590 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1659
	I1001 16:06:28.911672    2590 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/files/etc/ssl/certs/16592.pem --> /etc/ssl/certs/16592.pem (1708 bytes)
	I1001 16:06:28.919714    2590 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/files/etc/test/nested/copy/1659/hosts --> /etc/test/nested/copy/1659/hosts (40 bytes)
	I1001 16:06:28.927827    2590 start.go:296] duration metric: took 47.230708ms for postStartSetup
	I1001 16:06:28.927838    2590 fix.go:56] duration metric: took 448.836208ms for fixHost
	I1001 16:06:28.927876    2590 main.go:141] libmachine: Using SSH client type: native
	I1001 16:06:28.927985    2590 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104e05c00] 0x104e08440 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I1001 16:06:28.927988    2590 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 16:06:28.971499    2590 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727823988.977955192
	
	I1001 16:06:28.971505    2590 fix.go:216] guest clock: 1727823988.977955192
	I1001 16:06:28.971508    2590 fix.go:229] Guest: 2024-10-01 16:06:28.977955192 -0700 PDT Remote: 2024-10-01 16:06:28.927839 -0700 PDT m=+0.554638335 (delta=50.116192ms)
	I1001 16:06:28.971518    2590 fix.go:200] guest clock delta is within tolerance: 50.116192ms
	I1001 16:06:28.971520    2590 start.go:83] releasing machines lock for "functional-808000", held for 492.524333ms
	I1001 16:06:28.971816    2590 ssh_runner.go:195] Run: cat /version.json
	I1001 16:06:28.971822    2590 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/functional-808000/id_rsa Username:docker}
	I1001 16:06:28.971849    2590 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 16:06:28.971864    2590 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/functional-808000/id_rsa Username:docker}
	I1001 16:06:29.039457    2590 ssh_runner.go:195] Run: systemctl --version
	I1001 16:06:29.041550    2590 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 16:06:29.043443    2590 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 16:06:29.043471    2590 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 16:06:29.046978    2590 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1001 16:06:29.046982    2590 start.go:495] detecting cgroup driver to use...
	I1001 16:06:29.047051    2590 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 16:06:29.053678    2590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1001 16:06:29.057772    2590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1001 16:06:29.061500    2590 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1001 16:06:29.061524    2590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1001 16:06:29.065037    2590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 16:06:29.068731    2590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1001 16:06:29.072643    2590 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 16:06:29.076385    2590 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 16:06:29.080053    2590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1001 16:06:29.083699    2590 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1001 16:06:29.087371    2590 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1001 16:06:29.091237    2590 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 16:06:29.094901    2590 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 16:06:29.098607    2590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:06:29.207697    2590 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1001 16:06:29.217602    2590 start.go:495] detecting cgroup driver to use...
	I1001 16:06:29.217682    2590 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1001 16:06:29.223493    2590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 16:06:29.229339    2590 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 16:06:29.237534    2590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 16:06:29.243147    2590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1001 16:06:29.248677    2590 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 16:06:29.255042    2590 ssh_runner.go:195] Run: which cri-dockerd
	I1001 16:06:29.256590    2590 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1001 16:06:29.259722    2590 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1001 16:06:29.265865    2590 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1001 16:06:29.374444    2590 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1001 16:06:29.481921    2590 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1001 16:06:29.481987    2590 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1001 16:06:29.488559    2590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:06:29.588125    2590 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1001 16:06:41.892639    2590 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.304639583s)
	I1001 16:06:41.892716    2590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1001 16:06:41.898965    2590 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1001 16:06:41.908614    2590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1001 16:06:41.914453    2590 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1001 16:06:42.001812    2590 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1001 16:06:42.087115    2590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:06:42.188034    2590 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1001 16:06:42.195209    2590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1001 16:06:42.200798    2590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:06:42.284759    2590 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1001 16:06:42.313957    2590 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1001 16:06:42.314038    2590 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1001 16:06:42.316765    2590 start.go:563] Will wait 60s for crictl version
	I1001 16:06:42.316808    2590 ssh_runner.go:195] Run: which crictl
	I1001 16:06:42.318351    2590 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 16:06:42.329980    2590 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1001 16:06:42.330085    2590 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1001 16:06:42.337655    2590 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1001 16:06:42.347219    2590 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1001 16:06:42.347380    2590 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1001 16:06:42.355984    2590 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1001 16:06:42.359908    2590 kubeadm.go:883] updating cluster {Name:functional-808000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-808000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 16:06:42.359954    2590 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:06:42.360001    2590 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1001 16:06:42.366001    2590 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-808000
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1001 16:06:42.366005    2590 docker.go:615] Images already preloaded, skipping extraction
	I1001 16:06:42.366064    2590 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1001 16:06:42.374888    2590 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-808000
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1001 16:06:42.374893    2590 cache_images.go:84] Images are preloaded, skipping loading
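The two image listings above drive the "Images are preloaded, skipping loading" decision: the runner lists what the Docker daemon already holds and checks it against the expected preload set for v1.31.1. A minimal Go sketch of that kind of comparison follows; the image list is copied from the log, but the use of exec.Command here is an illustration, not minikube's actual cache_images implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Expected images for Kubernetes v1.31.1 on Docker, taken from the log above.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/kube-controller-manager:v1.31.1",
		"registry.k8s.io/kube-scheduler:v1.31.1",
		"registry.k8s.io/kube-proxy:v1.31.1",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}

	// Same listing command the runner executes over SSH in the log.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}

	var missing []string
	for _, img := range required {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	if len(missing) == 0 {
		fmt.Println("images already preloaded, skipping extraction")
	} else {
		fmt.Println("need to load:", strings.Join(missing, ", "))
	}
}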
	I1001 16:06:42.374896    2590 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.31.1 docker true true} ...
	I1001 16:06:42.374943    2590 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-808000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-808000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 16:06:42.375010    2590 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1001 16:06:42.390551    2590 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1001 16:06:42.390561    2590 cni.go:84] Creating CNI manager for ""
	I1001 16:06:42.390571    2590 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:06:42.390578    2590 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 16:06:42.390587    2590 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-808000 NodeName:functional-808000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 16:06:42.390639    2590 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-808000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
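The kubeadm config dumped above is a single file containing four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---", later written to /var/tmp/minikube/kubeadm.yaml.new. As a rough, standard-library-only Go sketch of how such a multi-document file can be split and its document kinds listed (the path is the one the runner writes; everything else is illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// File assembled by the runner above (kubeadm.yaml.new is later copied over kubeadm.yaml).
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	// kubeadm accepts several config kinds in one file, separated by "---" lines.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "(unknown)"
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
				break
			}
		}
		fmt.Printf("document %d: %s\n", i+1, kind)
	}
}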
	
	I1001 16:06:42.390696    2590 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 16:06:42.394131    2590 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 16:06:42.394168    2590 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 16:06:42.397356    2590 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1001 16:06:42.403326    2590 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 16:06:42.408938    2590 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2012 bytes)
	I1001 16:06:42.414856    2590 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I1001 16:06:42.416178    2590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:06:42.499625    2590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 16:06:42.505726    2590 certs.go:68] Setting up /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000 for IP: 192.168.105.4
	I1001 16:06:42.505734    2590 certs.go:194] generating shared ca certs ...
	I1001 16:06:42.505744    2590 certs.go:226] acquiring lock for ca certs: {Name:mk74f46ad151665c6dd5cd39311b967c23e44dd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:06:42.505894    2590 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.key
	I1001 16:06:42.505957    2590 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/proxy-client-ca.key
	I1001 16:06:42.505962    2590 certs.go:256] generating profile certs ...
	I1001 16:06:42.506027    2590 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.key
	I1001 16:06:42.506079    2590 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/apiserver.key.6c008310
	I1001 16:06:42.506129    2590 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/proxy-client.key
	I1001 16:06:42.506276    2590 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/1659.pem (1338 bytes)
	W1001 16:06:42.506304    2590 certs.go:480] ignoring /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/1659_empty.pem, impossibly tiny 0 bytes
	I1001 16:06:42.506308    2590 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 16:06:42.506326    2590 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem (1078 bytes)
	I1001 16:06:42.506344    2590 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem (1123 bytes)
	I1001 16:06:42.506359    2590 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/key.pem (1679 bytes)
	I1001 16:06:42.506393    2590 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/files/etc/ssl/certs/16592.pem (1708 bytes)
	I1001 16:06:42.506759    2590 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 16:06:42.515489    2590 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 16:06:42.524432    2590 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 16:06:42.532453    2590 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1001 16:06:42.540594    2590 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1001 16:06:42.548671    2590 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 16:06:42.556627    2590 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 16:06:42.564654    2590 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1001 16:06:42.572845    2590 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 16:06:42.581420    2590 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/1659.pem --> /usr/share/ca-certificates/1659.pem (1338 bytes)
	I1001 16:06:42.589304    2590 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/files/etc/ssl/certs/16592.pem --> /usr/share/ca-certificates/16592.pem (1708 bytes)
	I1001 16:06:42.597171    2590 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 16:06:42.603367    2590 ssh_runner.go:195] Run: openssl version
	I1001 16:06:42.605458    2590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 16:06:42.609062    2590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 16:06:42.610674    2590 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I1001 16:06:42.610696    2590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 16:06:42.612745    2590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 16:06:42.616336    2590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1659.pem && ln -fs /usr/share/ca-certificates/1659.pem /etc/ssl/certs/1659.pem"
	I1001 16:06:42.620374    2590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1659.pem
	I1001 16:06:42.622028    2590 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:04 /usr/share/ca-certificates/1659.pem
	I1001 16:06:42.622049    2590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1659.pem
	I1001 16:06:42.624098    2590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1659.pem /etc/ssl/certs/51391683.0"
	I1001 16:06:42.627918    2590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16592.pem && ln -fs /usr/share/ca-certificates/16592.pem /etc/ssl/certs/16592.pem"
	I1001 16:06:42.631944    2590 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16592.pem
	I1001 16:06:42.633567    2590 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:04 /usr/share/ca-certificates/16592.pem
	I1001 16:06:42.633591    2590 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16592.pem
	I1001 16:06:42.635554    2590 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16592.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 16:06:42.639352    2590 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 16:06:42.641076    2590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 16:06:42.643423    2590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 16:06:42.645687    2590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 16:06:42.647872    2590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 16:06:42.649910    2590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 16:06:42.651801    2590 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
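Each of the openssl invocations above is a `-checkend 86400` test, i.e. "will this certificate still be valid 24 hours from now?". The same check can be written with Go's crypto/x509; a sketch under that assumption, using one of the certificate paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// One of the certificates checked in the log above.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: report if the cert
	// expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}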
	I1001 16:06:42.653818    2590 kubeadm.go:392] StartCluster: {Name:functional-808000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-808000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:06:42.653903    2590 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1001 16:06:42.659231    2590 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 16:06:42.663015    2590 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1001 16:06:42.663018    2590 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1001 16:06:42.663046    2590 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1001 16:06:42.666467    2590 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1001 16:06:42.666750    2590 kubeconfig.go:125] found "functional-808000" server: "https://192.168.105.4:8441"
	I1001 16:06:42.667423    2590 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1001 16:06:42.670909    2590 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
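The drift check above relies on diff's exit status: 0 means the old and new kubeadm.yaml are identical, 1 means they differ (here only the enable-admission-plugins line changed), and anything higher is an error. A minimal Go sketch of interpreting that exit code; the paths match the log, the rest is illustrative rather than minikube's own logic.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same comparison the runner performs (minus sudo) before deciding
	// whether to reconfigure the cluster.
	cmd := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.Output()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("no drift: configs are identical")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		// diff exits with status 1 when the files differ; stdout holds
		// the unified diff shown in the log above.
		fmt.Println("config drift detected:")
		fmt.Print(string(out))
	default:
		panic(err)
	}
}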
	I1001 16:06:42.670912    2590 kubeadm.go:1160] stopping kube-system containers ...
	I1001 16:06:42.670961    2590 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1001 16:06:42.677814    2590 docker.go:483] Stopping containers: [476cb34703d2 2d5d646df688 78a73ac54995 685515af7cba d69d12695453 8a5f1a3dc97e b635b11d284b f231c3349b15 7657759fe949 9ebcfe352077 de3fb8cb0c7d a76a67763f00 7941a037f03f 567362ae7e43 cd0cbe16ffa6 6f3950283865 2638ad9afe68 a5a751edeb5b 3263d2ba4047 43a8a18256cc 5ac02f6f7050 db5e75a04bcc 527c41bbb24b 9bb154e72349 7d25f973b3d0 4721165f0648 4668437bbc0d ad881645a6e4 7530fd6d98ec]
	I1001 16:06:42.677895    2590 ssh_runner.go:195] Run: docker stop 476cb34703d2 2d5d646df688 78a73ac54995 685515af7cba d69d12695453 8a5f1a3dc97e b635b11d284b f231c3349b15 7657759fe949 9ebcfe352077 de3fb8cb0c7d a76a67763f00 7941a037f03f 567362ae7e43 cd0cbe16ffa6 6f3950283865 2638ad9afe68 a5a751edeb5b 3263d2ba4047 43a8a18256cc 5ac02f6f7050 db5e75a04bcc 527c41bbb24b 9bb154e72349 7d25f973b3d0 4721165f0648 4668437bbc0d ad881645a6e4 7530fd6d98ec
	I1001 16:06:42.684744    2590 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1001 16:06:42.786417    2590 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 16:06:42.792928    2590 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Oct  1 23:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Oct  1 23:05 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Oct  1 23:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Oct  1 23:05 /etc/kubernetes/scheduler.conf
	
	I1001 16:06:42.792988    2590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1001 16:06:42.797972    2590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1001 16:06:42.802139    2590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1001 16:06:42.806581    2590 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1001 16:06:42.806618    2590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 16:06:42.810709    2590 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1001 16:06:42.814455    2590 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1001 16:06:42.814485    2590 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 16:06:42.818234    2590 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 16:06:42.821900    2590 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:06:42.839538    2590 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:06:43.452842    2590 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:06:43.582040    2590 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:06:43.613666    2590 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:06:43.641491    2590 api_server.go:52] waiting for apiserver process to appear ...
	I1001 16:06:43.641571    2590 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 16:06:44.143694    2590 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 16:06:44.643631    2590 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 16:06:44.649190    2590 api_server.go:72] duration metric: took 1.007710333s to wait for apiserver process to appear ...
	I1001 16:06:44.649197    2590 api_server.go:88] waiting for apiserver healthz status ...
	I1001 16:06:44.649216    2590 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1001 16:06:45.963153    2590 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1001 16:06:45.963162    2590 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1001 16:06:45.963168    2590 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1001 16:06:45.993137    2590 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 16:06:45.993155    2590 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 16:06:46.151305    2590 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1001 16:06:46.154272    2590 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 16:06:46.154277    2590 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 16:06:46.651292    2590 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1001 16:06:46.656955    2590 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1001 16:06:46.656969    2590 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1001 16:06:47.151283    2590 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1001 16:06:47.155018    2590 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I1001 16:06:47.159605    2590 api_server.go:141] control plane version: v1.31.1
	I1001 16:06:47.159614    2590 api_server.go:131] duration metric: took 2.510442875s to wait for apiserver health ...
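The healthz wait above passes through three phases visible in the responses: 403 while anonymous requests to /healthz are still rejected, 500 while post-start hooks (rbac/bootstrap-roles, bootstrap priority classes, and so on) report failures, and finally 200. A rough Go sketch of such a poll loop follows; TLS verification is skipped purely for illustration (minikube itself authenticates against the cluster CA), and the endpoint is the one from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint polled in the log above.
	url := "https://192.168.105.4:8441/healthz"

	// Illustration only: skip TLS verification instead of wiring up the
	// cluster CA and client certificates.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 403 (anonymous forbidden) and 500 (post-start hooks still
			// failing) both mean "not ready yet", as in the log above.
			fmt.Printf("not ready yet: %d\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for /healthz")
}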
	I1001 16:06:47.159619    2590 cni.go:84] Creating CNI manager for ""
	I1001 16:06:47.159625    2590 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:06:47.253152    2590 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 16:06:47.259144    2590 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 16:06:47.263351    2590 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 16:06:47.271059    2590 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 16:06:47.275428    2590 system_pods.go:59] 7 kube-system pods found
	I1001 16:06:47.275435    2590 system_pods.go:61] "coredns-7c65d6cfc9-kbmrw" [19c13d54-9a3d-4fc9-8113-878e24eb3c2f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1001 16:06:47.275438    2590 system_pods.go:61] "etcd-functional-808000" [316693cd-7488-49e1-b39e-e924a785265c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1001 16:06:47.275440    2590 system_pods.go:61] "kube-apiserver-functional-808000" [4c41c206-c2b0-4a91-81aa-1b4fedeba895] Pending
	I1001 16:06:47.275442    2590 system_pods.go:61] "kube-controller-manager-functional-808000" [aa95e6a8-a85e-4954-b619-62552a04f34c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1001 16:06:47.275444    2590 system_pods.go:61] "kube-proxy-7nh89" [96bf5332-dbc7-4582-afbc-442701519476] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1001 16:06:47.275446    2590 system_pods.go:61] "kube-scheduler-functional-808000" [9156c4de-8ca4-4679-afd8-7299c6d7cd9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1001 16:06:47.275448    2590 system_pods.go:61] "storage-provisioner" [37e0aa19-c2af-4e8d-8eda-b5ff604af931] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1001 16:06:47.275450    2590 system_pods.go:74] duration metric: took 4.386166ms to wait for pod list to return data ...
	I1001 16:06:47.275453    2590 node_conditions.go:102] verifying NodePressure condition ...
	I1001 16:06:47.276795    2590 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 16:06:47.276800    2590 node_conditions.go:123] node cpu capacity is 2
	I1001 16:06:47.276804    2590 node_conditions.go:105] duration metric: took 1.350167ms to run NodePressure ...
	I1001 16:06:47.276811    2590 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:06:47.500123    2590 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1001 16:06:47.503355    2590 kubeadm.go:739] kubelet initialised
	I1001 16:06:47.503362    2590 kubeadm.go:740] duration metric: took 3.227917ms waiting for restarted kubelet to initialise ...
	I1001 16:06:47.503367    2590 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 16:06:47.506717    2590 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kbmrw" in "kube-system" namespace to be "Ready" ...
	I1001 16:06:49.512009    2590 pod_ready.go:103] pod "coredns-7c65d6cfc9-kbmrw" in "kube-system" namespace has status "Ready":"False"
	I1001 16:06:51.518486    2590 pod_ready.go:103] pod "coredns-7c65d6cfc9-kbmrw" in "kube-system" namespace has status "Ready":"False"
	I1001 16:06:52.018190    2590 pod_ready.go:93] pod "coredns-7c65d6cfc9-kbmrw" in "kube-system" namespace has status "Ready":"True"
	I1001 16:06:52.018212    2590 pod_ready.go:82] duration metric: took 4.511534959s for pod "coredns-7c65d6cfc9-kbmrw" in "kube-system" namespace to be "Ready" ...
	I1001 16:06:52.018227    2590 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-808000" in "kube-system" namespace to be "Ready" ...
	I1001 16:06:54.030658    2590 pod_ready.go:103] pod "etcd-functional-808000" in "kube-system" namespace has status "Ready":"False"
	I1001 16:06:56.529975    2590 pod_ready.go:103] pod "etcd-functional-808000" in "kube-system" namespace has status "Ready":"False"
	I1001 16:06:58.024598    2590 pod_ready.go:93] pod "etcd-functional-808000" in "kube-system" namespace has status "Ready":"True"
	I1001 16:06:58.024608    2590 pod_ready.go:82] duration metric: took 6.006442417s for pod "etcd-functional-808000" in "kube-system" namespace to be "Ready" ...
	I1001 16:06:58.024615    2590 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-808000" in "kube-system" namespace to be "Ready" ...
	I1001 16:07:00.032384    2590 pod_ready.go:103] pod "kube-apiserver-functional-808000" in "kube-system" namespace has status "Ready":"False"
	I1001 16:07:01.032437    2590 pod_ready.go:93] pod "kube-apiserver-functional-808000" in "kube-system" namespace has status "Ready":"True"
	I1001 16:07:01.032448    2590 pod_ready.go:82] duration metric: took 3.007861875s for pod "kube-apiserver-functional-808000" in "kube-system" namespace to be "Ready" ...
	I1001 16:07:01.032456    2590 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-808000" in "kube-system" namespace to be "Ready" ...
	I1001 16:07:03.041294    2590 pod_ready.go:103] pod "kube-controller-manager-functional-808000" in "kube-system" namespace has status "Ready":"False"
	I1001 16:07:03.544037    2590 pod_ready.go:93] pod "kube-controller-manager-functional-808000" in "kube-system" namespace has status "Ready":"True"
	I1001 16:07:03.544049    2590 pod_ready.go:82] duration metric: took 2.511614584s for pod "kube-controller-manager-functional-808000" in "kube-system" namespace to be "Ready" ...
	I1001 16:07:03.544061    2590 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7nh89" in "kube-system" namespace to be "Ready" ...
	I1001 16:07:03.549428    2590 pod_ready.go:93] pod "kube-proxy-7nh89" in "kube-system" namespace has status "Ready":"True"
	I1001 16:07:03.549436    2590 pod_ready.go:82] duration metric: took 5.368416ms for pod "kube-proxy-7nh89" in "kube-system" namespace to be "Ready" ...
	I1001 16:07:03.549444    2590 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-808000" in "kube-system" namespace to be "Ready" ...
	I1001 16:07:03.553949    2590 pod_ready.go:93] pod "kube-scheduler-functional-808000" in "kube-system" namespace has status "Ready":"True"
	I1001 16:07:03.553954    2590 pod_ready.go:82] duration metric: took 4.50525ms for pod "kube-scheduler-functional-808000" in "kube-system" namespace to be "Ready" ...
	I1001 16:07:03.553965    2590 pod_ready.go:39] duration metric: took 16.050773917s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
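The pod_ready waits above poll each system-critical pod until its "Ready" condition reports True. As a sketch of the same check using client-go (the kubeconfig path and pod name are taken from the log; this is an illustration, not minikube's pod_ready implementation):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig updated by the runner later in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19740-1141/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// One of the pods waited on above.
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-functional-808000", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := false
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
}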
	I1001 16:07:03.553984    2590 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 16:07:03.564057    2590 ops.go:34] apiserver oom_adj: -16
	I1001 16:07:03.564064    2590 kubeadm.go:597] duration metric: took 20.901278542s to restartPrimaryControlPlane
	I1001 16:07:03.564072    2590 kubeadm.go:394] duration metric: took 20.91048925s to StartCluster
	I1001 16:07:03.564096    2590 settings.go:142] acquiring lock: {Name:mkd0df72d236cca9ab7a62ebb6aa022c207aaa93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:07:03.564279    2590 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:07:03.564919    2590 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/kubeconfig: {Name:mk6821adb20f42e2e1842a7c6bcaf1ce77531dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:07:03.565314    2590 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:07:03.565325    2590 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 16:07:03.565381    2590 addons.go:69] Setting storage-provisioner=true in profile "functional-808000"
	I1001 16:07:03.565391    2590 addons.go:234] Setting addon storage-provisioner=true in "functional-808000"
	W1001 16:07:03.565396    2590 addons.go:243] addon storage-provisioner should already be in state true
	I1001 16:07:03.565416    2590 host.go:66] Checking if "functional-808000" exists ...
	I1001 16:07:03.565469    2590 addons.go:69] Setting default-storageclass=true in profile "functional-808000"
	I1001 16:07:03.565480    2590 config.go:182] Loaded profile config "functional-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:07:03.565529    2590 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-808000"
	I1001 16:07:03.567131    2590 addons.go:234] Setting addon default-storageclass=true in "functional-808000"
	W1001 16:07:03.567137    2590 addons.go:243] addon default-storageclass should already be in state true
	I1001 16:07:03.567149    2590 host.go:66] Checking if "functional-808000" exists ...
	I1001 16:07:03.569990    2590 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 16:07:03.569996    2590 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 16:07:03.570006    2590 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/functional-808000/id_rsa Username:docker}
	I1001 16:07:03.573245    2590 out.go:177] * Verifying Kubernetes components...
	I1001 16:07:03.576345    2590 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:07:03.580272    2590 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:07:03.584346    2590 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 16:07:03.584351    2590 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 16:07:03.584359    2590 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/functional-808000/id_rsa Username:docker}
	I1001 16:07:03.704328    2590 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 16:07:03.711156    2590 node_ready.go:35] waiting up to 6m0s for node "functional-808000" to be "Ready" ...
	I1001 16:07:03.712675    2590 node_ready.go:49] node "functional-808000" has status "Ready":"True"
	I1001 16:07:03.712682    2590 node_ready.go:38] duration metric: took 1.515625ms for node "functional-808000" to be "Ready" ...
	I1001 16:07:03.712684    2590 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 16:07:03.715118    2590 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kbmrw" in "kube-system" namespace to be "Ready" ...
	I1001 16:07:03.715399    2590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 16:07:03.717571    2590 pod_ready.go:93] pod "coredns-7c65d6cfc9-kbmrw" in "kube-system" namespace has status "Ready":"True"
	I1001 16:07:03.717575    2590 pod_ready.go:82] duration metric: took 2.451209ms for pod "coredns-7c65d6cfc9-kbmrw" in "kube-system" namespace to be "Ready" ...
	I1001 16:07:03.717578    2590 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-808000" in "kube-system" namespace to be "Ready" ...
	I1001 16:07:03.733230    2590 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 16:07:03.935939    2590 pod_ready.go:93] pod "etcd-functional-808000" in "kube-system" namespace has status "Ready":"True"
	I1001 16:07:03.935945    2590 pod_ready.go:82] duration metric: took 218.367083ms for pod "etcd-functional-808000" in "kube-system" namespace to be "Ready" ...
	I1001 16:07:03.935948    2590 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-808000" in "kube-system" namespace to be "Ready" ...
	I1001 16:07:04.057398    2590 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1001 16:07:04.065152    2590 addons.go:510] duration metric: took 499.838375ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1001 16:07:04.343060    2590 pod_ready.go:93] pod "kube-apiserver-functional-808000" in "kube-system" namespace has status "Ready":"True"
	I1001 16:07:04.343091    2590 pod_ready.go:82] duration metric: took 407.135916ms for pod "kube-apiserver-functional-808000" in "kube-system" namespace to be "Ready" ...
	I1001 16:07:04.343112    2590 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-808000" in "kube-system" namespace to be "Ready" ...
	I1001 16:07:04.744157    2590 pod_ready.go:93] pod "kube-controller-manager-functional-808000" in "kube-system" namespace has status "Ready":"True"
	I1001 16:07:04.744184    2590 pod_ready.go:82] duration metric: took 401.057792ms for pod "kube-controller-manager-functional-808000" in "kube-system" namespace to be "Ready" ...
	I1001 16:07:04.744203    2590 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7nh89" in "kube-system" namespace to be "Ready" ...
	I1001 16:07:05.145447    2590 pod_ready.go:93] pod "kube-proxy-7nh89" in "kube-system" namespace has status "Ready":"True"
	I1001 16:07:05.145476    2590 pod_ready.go:82] duration metric: took 401.265292ms for pod "kube-proxy-7nh89" in "kube-system" namespace to be "Ready" ...
	I1001 16:07:05.145507    2590 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-808000" in "kube-system" namespace to be "Ready" ...
	I1001 16:07:05.540390    2590 pod_ready.go:93] pod "kube-scheduler-functional-808000" in "kube-system" namespace has status "Ready":"True"
	I1001 16:07:05.540464    2590 pod_ready.go:82] duration metric: took 394.89425ms for pod "kube-scheduler-functional-808000" in "kube-system" namespace to be "Ready" ...
	I1001 16:07:05.540479    2590 pod_ready.go:39] duration metric: took 1.8278095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 16:07:05.540498    2590 api_server.go:52] waiting for apiserver process to appear ...
	I1001 16:07:05.540687    2590 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 16:07:05.555601    2590 api_server.go:72] duration metric: took 1.99028575s to wait for apiserver process to appear ...
	I1001 16:07:05.555620    2590 api_server.go:88] waiting for apiserver healthz status ...
	I1001 16:07:05.555643    2590 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I1001 16:07:05.561166    2590 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I1001 16:07:05.562130    2590 api_server.go:141] control plane version: v1.31.1
	I1001 16:07:05.562137    2590 api_server.go:131] duration metric: took 6.512166ms to wait for apiserver health ...
	I1001 16:07:05.562142    2590 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 16:07:05.747572    2590 system_pods.go:59] 7 kube-system pods found
	I1001 16:07:05.747595    2590 system_pods.go:61] "coredns-7c65d6cfc9-kbmrw" [19c13d54-9a3d-4fc9-8113-878e24eb3c2f] Running
	I1001 16:07:05.747602    2590 system_pods.go:61] "etcd-functional-808000" [316693cd-7488-49e1-b39e-e924a785265c] Running
	I1001 16:07:05.747609    2590 system_pods.go:61] "kube-apiserver-functional-808000" [4c41c206-c2b0-4a91-81aa-1b4fedeba895] Running
	I1001 16:07:05.747614    2590 system_pods.go:61] "kube-controller-manager-functional-808000" [aa95e6a8-a85e-4954-b619-62552a04f34c] Running
	I1001 16:07:05.747618    2590 system_pods.go:61] "kube-proxy-7nh89" [96bf5332-dbc7-4582-afbc-442701519476] Running
	I1001 16:07:05.747622    2590 system_pods.go:61] "kube-scheduler-functional-808000" [9156c4de-8ca4-4679-afd8-7299c6d7cd9f] Running
	I1001 16:07:05.747625    2590 system_pods.go:61] "storage-provisioner" [37e0aa19-c2af-4e8d-8eda-b5ff604af931] Running
	I1001 16:07:05.747631    2590 system_pods.go:74] duration metric: took 185.485583ms to wait for pod list to return data ...
	I1001 16:07:05.747638    2590 default_sa.go:34] waiting for default service account to be created ...
	I1001 16:07:05.943674    2590 default_sa.go:45] found service account: "default"
	I1001 16:07:05.943694    2590 default_sa.go:55] duration metric: took 196.049083ms for default service account to be created ...
	I1001 16:07:05.943707    2590 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 16:07:06.143185    2590 system_pods.go:86] 7 kube-system pods found
	I1001 16:07:06.143199    2590 system_pods.go:89] "coredns-7c65d6cfc9-kbmrw" [19c13d54-9a3d-4fc9-8113-878e24eb3c2f] Running
	I1001 16:07:06.143205    2590 system_pods.go:89] "etcd-functional-808000" [316693cd-7488-49e1-b39e-e924a785265c] Running
	I1001 16:07:06.143209    2590 system_pods.go:89] "kube-apiserver-functional-808000" [4c41c206-c2b0-4a91-81aa-1b4fedeba895] Running
	I1001 16:07:06.143214    2590 system_pods.go:89] "kube-controller-manager-functional-808000" [aa95e6a8-a85e-4954-b619-62552a04f34c] Running
	I1001 16:07:06.143218    2590 system_pods.go:89] "kube-proxy-7nh89" [96bf5332-dbc7-4582-afbc-442701519476] Running
	I1001 16:07:06.143221    2590 system_pods.go:89] "kube-scheduler-functional-808000" [9156c4de-8ca4-4679-afd8-7299c6d7cd9f] Running
	I1001 16:07:06.143224    2590 system_pods.go:89] "storage-provisioner" [37e0aa19-c2af-4e8d-8eda-b5ff604af931] Running
	I1001 16:07:06.143231    2590 system_pods.go:126] duration metric: took 199.520334ms to wait for k8s-apps to be running ...
	I1001 16:07:06.143237    2590 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 16:07:06.143367    2590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 16:07:06.157464    2590 system_svc.go:56] duration metric: took 14.222459ms WaitForService to wait for kubelet
	I1001 16:07:06.157477    2590 kubeadm.go:582] duration metric: took 2.592173916s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:07:06.157494    2590 node_conditions.go:102] verifying NodePressure condition ...
	I1001 16:07:06.343974    2590 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1001 16:07:06.344004    2590 node_conditions.go:123] node cpu capacity is 2
	I1001 16:07:06.344029    2590 node_conditions.go:105] duration metric: took 186.528208ms to run NodePressure ...
	I1001 16:07:06.344053    2590 start.go:241] waiting for startup goroutines ...
	I1001 16:07:06.344069    2590 start.go:246] waiting for cluster config update ...
	I1001 16:07:06.344092    2590 start.go:255] writing updated cluster config ...
	I1001 16:07:06.345408    2590 ssh_runner.go:195] Run: rm -f paused
	I1001 16:07:06.410319    2590 start.go:600] kubectl: 1.30.2, cluster: 1.31.1 (minor skew: 1)
	I1001 16:07:06.414381    2590 out.go:177] * Done! kubectl is now configured to use "functional-808000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Oct 01 23:07:44 functional-808000 dockerd[6010]: time="2024-10-01T23:07:44.766810910Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 01 23:07:44 functional-808000 dockerd[6004]: time="2024-10-01T23:07:44.766890690Z" level=info msg="ignoring event" container=5753b91b983be9037e5a41ff908df27abe2e2ce8d1004587b1af60e16f87d161 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 01 23:07:45 functional-808000 dockerd[6010]: time="2024-10-01T23:07:45.546601461Z" level=info msg="shim disconnected" id=966554b12de75f2ee6539b1092a07aaddd8bbba6b6605d7f24a8edf7e494287a namespace=moby
	Oct 01 23:07:45 functional-808000 dockerd[6010]: time="2024-10-01T23:07:45.546631415Z" level=warning msg="cleaning up after shim disconnected" id=966554b12de75f2ee6539b1092a07aaddd8bbba6b6605d7f24a8edf7e494287a namespace=moby
	Oct 01 23:07:45 functional-808000 dockerd[6004]: time="2024-10-01T23:07:45.546842176Z" level=info msg="ignoring event" container=966554b12de75f2ee6539b1092a07aaddd8bbba6b6605d7f24a8edf7e494287a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 01 23:07:45 functional-808000 dockerd[6010]: time="2024-10-01T23:07:45.546635664Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 01 23:07:45 functional-808000 dockerd[6010]: time="2024-10-01T23:07:45.644624032Z" level=info msg="shim disconnected" id=88f18ab9ffded89f739b9517a807389a8d4e446fb870fa667b02ff5908ad413f namespace=moby
	Oct 01 23:07:45 functional-808000 dockerd[6010]: time="2024-10-01T23:07:45.644652278Z" level=warning msg="cleaning up after shim disconnected" id=88f18ab9ffded89f739b9517a807389a8d4e446fb870fa667b02ff5908ad413f namespace=moby
	Oct 01 23:07:45 functional-808000 dockerd[6010]: time="2024-10-01T23:07:45.644656277Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 01 23:07:45 functional-808000 dockerd[6004]: time="2024-10-01T23:07:45.644756888Z" level=info msg="ignoring event" container=88f18ab9ffded89f739b9517a807389a8d4e446fb870fa667b02ff5908ad413f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 01 23:07:47 functional-808000 dockerd[6010]: time="2024-10-01T23:07:47.347699180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 01 23:07:47 functional-808000 dockerd[6010]: time="2024-10-01T23:07:47.347807914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 01 23:07:47 functional-808000 dockerd[6010]: time="2024-10-01T23:07:47.347841034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 01 23:07:47 functional-808000 dockerd[6010]: time="2024-10-01T23:07:47.347927397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 01 23:07:47 functional-808000 cri-dockerd[6279]: time="2024-10-01T23:07:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f7fb3cb5d143f200ac58a4e58bc3a6e3cdd5cb871720037de3caeba64e8cc946/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 01 23:07:48 functional-808000 cri-dockerd[6279]: time="2024-10-01T23:07:48Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Oct 01 23:07:48 functional-808000 dockerd[6010]: time="2024-10-01T23:07:48.208436338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 01 23:07:48 functional-808000 dockerd[6010]: time="2024-10-01T23:07:48.208724922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 01 23:07:48 functional-808000 dockerd[6010]: time="2024-10-01T23:07:48.208748211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 01 23:07:48 functional-808000 dockerd[6010]: time="2024-10-01T23:07:48.208805578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 01 23:07:55 functional-808000 dockerd[6010]: time="2024-10-01T23:07:55.510399091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 01 23:07:55 functional-808000 dockerd[6010]: time="2024-10-01T23:07:55.510440876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 01 23:07:55 functional-808000 dockerd[6010]: time="2024-10-01T23:07:55.510460957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 01 23:07:55 functional-808000 dockerd[6010]: time="2024-10-01T23:07:55.510504701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 01 23:07:55 functional-808000 cri-dockerd[6279]: time="2024-10-01T23:07:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3b18ec40c3c4c8ed8ec8539b8d7454a954dad62b9f3b1720c214d0690be70691/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                           CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2926c59c31f54       nginx@sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd015bb   12 seconds ago       Running             myfrontend                0                   f7fb3cb5d143f       sp-pod
	5753b91b983be       72565bf5bbedf                                                                   16 seconds ago       Exited              echoserver-arm            2                   f8b5868a27c8f       hello-node-connect-65d86f57f4-jlzmv
	10f80f6cbd32c       72565bf5bbedf                                                                   26 seconds ago       Exited              echoserver-arm            2                   899e025868580       hello-node-64b4f8f9ff-qznvn
	e70403d4c9afe       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf   36 seconds ago       Running             nginx                     0                   c1e0f0e00ff33       nginx-svc
	031e7aae7dabc       2f6c962e7b831                                                                   About a minute ago   Running             coredns                   2                   74db65ff9224b       coredns-7c65d6cfc9-kbmrw
	754f9255562e2       ba04bb24b9575                                                                   About a minute ago   Running             storage-provisioner       2                   6b42515e0e29c       storage-provisioner
	a1efa7e6337ad       24a140c548c07                                                                   About a minute ago   Running             kube-proxy                2                   2a0e46c93aa83       kube-proxy-7nh89
	cf7d570907824       7f8aa378bb47d                                                                   About a minute ago   Running             kube-scheduler            2                   adb3ab6c03897       kube-scheduler-functional-808000
	d88308ca84bd3       27e3830e14027                                                                   About a minute ago   Running             etcd                      2                   66ba067c7b811       etcd-functional-808000
	3e2fa838499fc       279f381cb3736                                                                   About a minute ago   Running             kube-controller-manager   2                   800464f04aba4       kube-controller-manager-functional-808000
	03c561bbf8f03       d3f53a98c0a9d                                                                   About a minute ago   Running             kube-apiserver            0                   c17b88539d226       kube-apiserver-functional-808000
	476cb34703d26       2f6c962e7b831                                                                   2 minutes ago        Exited              coredns                   1                   685515af7cba8       coredns-7c65d6cfc9-kbmrw
	2d5d646df6885       ba04bb24b9575                                                                   2 minutes ago        Exited              storage-provisioner       1                   8a5f1a3dc97e2       storage-provisioner
	78a73ac54995d       24a140c548c07                                                                   2 minutes ago        Exited              kube-proxy                1                   d69d12695453e       kube-proxy-7nh89
	b635b11d284b3       279f381cb3736                                                                   2 minutes ago        Exited              kube-controller-manager   1                   7941a037f03f8       kube-controller-manager-functional-808000
	f231c3349b15f       27e3830e14027                                                                   2 minutes ago        Exited              etcd                      1                   a76a67763f009       etcd-functional-808000
	9ebcfe3520774       7f8aa378bb47d                                                                   2 minutes ago        Exited              kube-scheduler            1                   567362ae7e43d       kube-scheduler-functional-808000
	
	
	==> coredns [031e7aae7dab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59780 - 45508 "HINFO IN 5787043548843344852.3715902421599422240. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004877542s
	[INFO] 10.244.0.1:3669 - 39462 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000090403s
	[INFO] 10.244.0.1:27306 - 57499 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000118232s
	[INFO] 10.244.0.1:42411 - 53390 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.00152402s
	[INFO] 10.244.0.1:3715 - 45672 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000084488s
	[INFO] 10.244.0.1:40444 - 40775 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.00006474s
	[INFO] 10.244.0.1:27199 - 42087 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000118024s
	
	
	==> coredns [476cb34703d2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44371 - 7731 "HINFO IN 4583678387027756899.7918528004550300090. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004407768s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-808000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-808000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=functional-808000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T16_04_53_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:04:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-808000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:07:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 23:07:47 +0000   Tue, 01 Oct 2024 23:04:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 23:07:47 +0000   Tue, 01 Oct 2024 23:04:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 23:07:47 +0000   Tue, 01 Oct 2024 23:04:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 23:07:47 +0000   Tue, 01 Oct 2024 23:04:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-808000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 f159a61fda644a5589e10a02532a0975
	  System UUID:                f159a61fda644a5589e10a02532a0975
	  Boot ID:                    d5843616-0125-4ebd-aa3c-7be96b620dc7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox-mount                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  default                     hello-node-64b4f8f9ff-qznvn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  default                     hello-node-connect-65d86f57f4-jlzmv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-7c65d6cfc9-kbmrw                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m2s
	  kube-system                 etcd-functional-808000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m8s
	  kube-system                 kube-apiserver-functional-808000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-controller-manager-functional-808000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 kube-proxy-7nh89                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 kube-scheduler-functional-808000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m1s                   kube-proxy       
	  Normal  Starting                 73s                    kube-proxy       
	  Normal  Starting                 2m9s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    3m8s (x2 over 3m8s)    kubelet          Node functional-808000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  3m8s (x2 over 3m8s)    kubelet          Node functional-808000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     3m8s (x2 over 3m8s)    kubelet          Node functional-808000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m8s                   kubelet          Starting kubelet.
	  Normal  NodeReady                3m4s                   kubelet          Node functional-808000 status is now: NodeReady
	  Normal  RegisteredNode           3m3s                   node-controller  Node functional-808000 event: Registered Node functional-808000 in Controller
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node functional-808000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node functional-808000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m12s (x7 over 2m12s)  kubelet          Node functional-808000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m6s                   node-controller  Node functional-808000 event: Registered Node functional-808000 in Controller
	  Normal  Starting                 77s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s (x8 over 77s)      kubelet          Node functional-808000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s (x8 over 77s)      kubelet          Node functional-808000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s (x7 over 77s)      kubelet          Node functional-808000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           71s                    node-controller  Node functional-808000 event: Registered Node functional-808000 in Controller
	
	
	==> dmesg <==
	[  +0.215280] systemd-fstab-generator[4012]: Ignoring "noauto" option for root device
	[  +0.939915] systemd-fstab-generator[4136]: Ignoring "noauto" option for root device
	[  +3.405660] kauditd_printk_skb: 199 callbacks suppressed
	[Oct 1 23:06] systemd-fstab-generator[5080]: Ignoring "noauto" option for root device
	[  +0.057725] kauditd_printk_skb: 35 callbacks suppressed
	[ +20.784894] systemd-fstab-generator[5542]: Ignoring "noauto" option for root device
	[  +0.055268] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.111414] systemd-fstab-generator[5576]: Ignoring "noauto" option for root device
	[  +0.109213] systemd-fstab-generator[5588]: Ignoring "noauto" option for root device
	[  +0.106734] systemd-fstab-generator[5602]: Ignoring "noauto" option for root device
	[  +5.122124] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.309672] systemd-fstab-generator[6232]: Ignoring "noauto" option for root device
	[  +0.084307] systemd-fstab-generator[6244]: Ignoring "noauto" option for root device
	[  +0.099032] systemd-fstab-generator[6256]: Ignoring "noauto" option for root device
	[  +0.100762] systemd-fstab-generator[6271]: Ignoring "noauto" option for root device
	[  +0.209862] systemd-fstab-generator[6443]: Ignoring "noauto" option for root device
	[  +1.078445] systemd-fstab-generator[6567]: Ignoring "noauto" option for root device
	[  +3.422736] kauditd_printk_skb: 199 callbacks suppressed
	[Oct 1 23:07] systemd-fstab-generator[7592]: Ignoring "noauto" option for root device
	[  +0.056778] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.413409] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.976317] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.008822] kauditd_printk_skb: 20 callbacks suppressed
	[ +15.310880] kauditd_printk_skb: 38 callbacks suppressed
	[ +17.123703] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [d88308ca84bd] <==
	{"level":"info","ts":"2024-10-01T23:06:44.345614Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-10-01T23:06:44.345693Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T23:06:44.345789Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T23:06:44.351434Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T23:06:44.352348Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-01T23:06:44.352863Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-01T23:06:44.352957Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-01T23:06:44.353353Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-01T23:06:44.353811Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-01T23:06:45.448227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-10-01T23:06:45.448421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-10-01T23:06:45.448536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-01T23:06:45.448660Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-10-01T23:06:45.448717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-10-01T23:06:45.448775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-10-01T23:06:45.448928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-10-01T23:06:45.451579Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-808000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-01T23:06:45.451716Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T23:06:45.452187Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T23:06:45.452243Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T23:06:45.452281Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T23:06:45.454044Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T23:06:45.454043Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T23:06:45.456402Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-01T23:06:45.457076Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> etcd [f231c3349b15] <==
	{"level":"info","ts":"2024-10-01T23:05:50.072418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-01T23:05:50.072440Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-10-01T23:05:50.072457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-10-01T23:05:50.072466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-01T23:05:50.072476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-10-01T23:05:50.072483Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-01T23:05:50.074003Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T23:05:50.074152Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T23:05:50.074007Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-808000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-01T23:05:50.074438Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T23:05:50.074523Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T23:05:50.075169Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T23:05:50.075283Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T23:05:50.076264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-10-01T23:05:50.076346Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-01T23:06:29.623788Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-01T23:06:29.623821Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-808000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-10-01T23:06:29.623869Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T23:06:29.623923Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T23:06:29.638936Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-01T23:06:29.638958Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-01T23:06:29.641064Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-10-01T23:06:29.644842Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-01T23:06:29.644899Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-01T23:06:29.644903Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-808000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 23:08:01 up 3 min,  0 users,  load average: 0.73, 0.54, 0.23
	Linux functional-808000 5.10.207 #1 SMP PREEMPT Mon Sep 23 18:07:35 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [03c561bbf8f0] <==
	I1001 23:06:46.063123       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1001 23:06:46.066404       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1001 23:06:46.066459       1 aggregator.go:171] initial CRD sync complete...
	I1001 23:06:46.066467       1 autoregister_controller.go:144] Starting autoregister controller
	I1001 23:06:46.066470       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1001 23:06:46.066473       1 cache.go:39] Caches are synced for autoregister controller
	I1001 23:06:46.067589       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1001 23:06:46.067600       1 policy_source.go:224] refreshing policies
	I1001 23:06:46.098746       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1001 23:06:46.950495       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1001 23:06:47.055176       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I1001 23:06:47.055681       1 controller.go:615] quota admission added evaluator for: endpoints
	I1001 23:06:47.059460       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1001 23:06:47.314822       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1001 23:06:47.318894       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1001 23:06:47.330638       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1001 23:06:47.337764       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1001 23:06:47.339716       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1001 23:07:07.800695       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.162.1"}
	I1001 23:07:14.572461       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1001 23:07:14.617345       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.154.176"}
	I1001 23:07:18.081668       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.117.32"}
	I1001 23:07:31.522663       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.58.126"}
	E1001 23:07:45.463166       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49733: use of closed network connection
	E1001 23:07:53.773106       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49738: use of closed network connection
	
	
	==> kube-controller-manager [3e2fa838499f] <==
	I1001 23:06:49.582026       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="385.756737ms"
	I1001 23:06:49.582330       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="39.828µs"
	I1001 23:06:49.927574       1 shared_informer.go:320] Caches are synced for garbage collector
	I1001 23:06:49.927606       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1001 23:06:49.932343       1 shared_informer.go:320] Caches are synced for garbage collector
	I1001 23:06:51.617649       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.770202ms"
	I1001 23:06:51.617990       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="46.161µs"
	I1001 23:07:14.583688       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="9.013215ms"
	I1001 23:07:14.591860       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="8.036196ms"
	I1001 23:07:14.591891       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="9.915µs"
	I1001 23:07:22.175869       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="20.372µs"
	I1001 23:07:23.181595       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="30.579µs"
	I1001 23:07:24.193202       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="23.33µs"
	I1001 23:07:31.485827       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="5.510792ms"
	I1001 23:07:31.490226       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="4.252983ms"
	I1001 23:07:31.490326       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="28.454µs"
	I1001 23:07:31.493920       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="21.58µs"
	I1001 23:07:32.331793       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="36.286µs"
	I1001 23:07:33.334985       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="25.455µs"
	I1001 23:07:35.390250       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="27.995µs"
	I1001 23:07:44.695893       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="85.78µs"
	I1001 23:07:45.551333       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="27.204µs"
	I1001 23:07:47.437682       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-808000"
	I1001 23:07:48.673268       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="44.202µs"
	I1001 23:07:56.684463       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="95.862µs"
	
	
	==> kube-controller-manager [b635b11d284b] <==
	I1001 23:05:54.034594       1 shared_informer.go:320] Caches are synced for HPA
	I1001 23:05:54.034659       1 shared_informer.go:320] Caches are synced for deployment
	I1001 23:05:54.034700       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1001 23:05:54.034708       1 shared_informer.go:320] Caches are synced for expand
	I1001 23:05:54.035871       1 shared_informer.go:320] Caches are synced for taint
	I1001 23:05:54.035910       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1001 23:05:54.035940       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-808000"
	I1001 23:05:54.035963       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1001 23:05:54.043194       1 shared_informer.go:320] Caches are synced for resource quota
	I1001 23:05:54.084238       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1001 23:05:54.084284       1 shared_informer.go:320] Caches are synced for attach detach
	I1001 23:05:54.084429       1 shared_informer.go:320] Caches are synced for daemon sets
	I1001 23:05:54.085401       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1001 23:05:54.086539       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1001 23:05:54.086575       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1001 23:05:54.086585       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1001 23:05:54.086590       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1001 23:05:54.119585       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="2.649314ms"
	I1001 23:05:54.119617       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="15.832µs"
	I1001 23:05:54.135768       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1001 23:05:54.187175       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1001 23:05:54.553123       1 shared_informer.go:320] Caches are synced for garbage collector
	I1001 23:05:54.641311       1 shared_informer.go:320] Caches are synced for garbage collector
	I1001 23:05:54.641665       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1001 23:06:21.413825       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-808000"
	
	
	==> kube-proxy [78a73ac54995] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 23:05:51.907477       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 23:05:51.910703       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1001 23:05:51.910726       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 23:05:51.920256       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 23:05:51.920272       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 23:05:51.920285       1 server_linux.go:169] "Using iptables Proxier"
	I1001 23:05:51.921425       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 23:05:51.921549       1 server.go:483] "Version info" version="v1.31.1"
	I1001 23:05:51.921556       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 23:05:51.922193       1 config.go:199] "Starting service config controller"
	I1001 23:05:51.922199       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 23:05:51.922215       1 config.go:105] "Starting endpoint slice config controller"
	I1001 23:05:51.922217       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 23:05:51.922341       1 config.go:328] "Starting node config controller"
	I1001 23:05:51.922343       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 23:05:52.023088       1 shared_informer.go:320] Caches are synced for node config
	I1001 23:05:52.023088       1 shared_informer.go:320] Caches are synced for service config
	I1001 23:05:52.023116       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [a1efa7e6337a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1001 23:06:47.199882       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1001 23:06:47.204050       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1001 23:06:47.204080       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 23:06:47.262694       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1001 23:06:47.262709       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1001 23:06:47.262720       1 server_linux.go:169] "Using iptables Proxier"
	I1001 23:06:47.263370       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 23:06:47.263478       1 server.go:483] "Version info" version="v1.31.1"
	I1001 23:06:47.263486       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 23:06:47.264065       1 config.go:199] "Starting service config controller"
	I1001 23:06:47.264071       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 23:06:47.264078       1 config.go:105] "Starting endpoint slice config controller"
	I1001 23:06:47.264080       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 23:06:47.268654       1 config.go:328] "Starting node config controller"
	I1001 23:06:47.268663       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 23:06:47.367787       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1001 23:06:47.367787       1 shared_informer.go:320] Caches are synced for service config
	I1001 23:06:47.368729       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9ebcfe352077] <==
	I1001 23:05:49.506890       1 serving.go:386] Generated self-signed cert in-memory
	W1001 23:05:50.573703       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1001 23:05:50.573750       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 23:05:50.573772       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1001 23:05:50.573779       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1001 23:05:50.602332       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1001 23:05:50.602346       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 23:05:50.603284       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 23:05:50.603314       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 23:05:50.603420       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1001 23:05:50.603472       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 23:05:50.704817       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1001 23:06:29.620782       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cf7d57090782] <==
	I1001 23:06:44.679982       1 serving.go:386] Generated self-signed cert in-memory
	W1001 23:06:45.970983       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1001 23:06:45.975087       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 23:06:45.975105       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1001 23:06:45.975111       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1001 23:06:46.000893       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1001 23:06:46.000909       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 23:06:46.001838       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1001 23:06:46.001887       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1001 23:06:46.001898       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 23:06:46.001904       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1001 23:06:46.103113       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 23:07:45 functional-808000 kubelet[6574]: I1001 23:07:45.545553    6574 scope.go:117] "RemoveContainer" containerID="415586c17b42b7db0a881df4c64ba6319a209eb3183019da2e1b3ae40d891c3f"
	Oct 01 23:07:45 functional-808000 kubelet[6574]: I1001 23:07:45.545947    6574 scope.go:117] "RemoveContainer" containerID="5753b91b983be9037e5a41ff908df27abe2e2ce8d1004587b1af60e16f87d161"
	Oct 01 23:07:45 functional-808000 kubelet[6574]: E1001 23:07:45.546012    6574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-jlzmv_default(f321200c-9947-4afd-8d2d-fc23f3348b94)\"" pod="default/hello-node-connect-65d86f57f4-jlzmv" podUID="f321200c-9947-4afd-8d2d-fc23f3348b94"
	Oct 01 23:07:45 functional-808000 kubelet[6574]: I1001 23:07:45.845155    6574 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mypd\" (UniqueName: \"kubernetes.io/host-path/115feb62-41e6-4dd2-ba94-c60bec08c1df-pvc-ad817a40-b95d-4865-b48a-d329222e5ed1\") pod \"115feb62-41e6-4dd2-ba94-c60bec08c1df\" (UID: \"115feb62-41e6-4dd2-ba94-c60bec08c1df\") "
	Oct 01 23:07:45 functional-808000 kubelet[6574]: I1001 23:07:45.845200    6574 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pkvr\" (UniqueName: \"kubernetes.io/projected/115feb62-41e6-4dd2-ba94-c60bec08c1df-kube-api-access-5pkvr\") pod \"115feb62-41e6-4dd2-ba94-c60bec08c1df\" (UID: \"115feb62-41e6-4dd2-ba94-c60bec08c1df\") "
	Oct 01 23:07:45 functional-808000 kubelet[6574]: I1001 23:07:45.845301    6574 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/115feb62-41e6-4dd2-ba94-c60bec08c1df-pvc-ad817a40-b95d-4865-b48a-d329222e5ed1" (OuterVolumeSpecName: "mypd") pod "115feb62-41e6-4dd2-ba94-c60bec08c1df" (UID: "115feb62-41e6-4dd2-ba94-c60bec08c1df"). InnerVolumeSpecName "pvc-ad817a40-b95d-4865-b48a-d329222e5ed1". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Oct 01 23:07:45 functional-808000 kubelet[6574]: I1001 23:07:45.846366    6574 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/115feb62-41e6-4dd2-ba94-c60bec08c1df-kube-api-access-5pkvr" (OuterVolumeSpecName: "kube-api-access-5pkvr") pod "115feb62-41e6-4dd2-ba94-c60bec08c1df" (UID: "115feb62-41e6-4dd2-ba94-c60bec08c1df"). InnerVolumeSpecName "kube-api-access-5pkvr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 01 23:07:45 functional-808000 kubelet[6574]: I1001 23:07:45.946052    6574 reconciler_common.go:288] "Volume detached for volume \"pvc-ad817a40-b95d-4865-b48a-d329222e5ed1\" (UniqueName: \"kubernetes.io/host-path/115feb62-41e6-4dd2-ba94-c60bec08c1df-pvc-ad817a40-b95d-4865-b48a-d329222e5ed1\") on node \"functional-808000\" DevicePath \"\""
	Oct 01 23:07:45 functional-808000 kubelet[6574]: I1001 23:07:45.946082    6574 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5pkvr\" (UniqueName: \"kubernetes.io/projected/115feb62-41e6-4dd2-ba94-c60bec08c1df-kube-api-access-5pkvr\") on node \"functional-808000\" DevicePath \"\""
	Oct 01 23:07:46 functional-808000 kubelet[6574]: I1001 23:07:46.579855    6574 scope.go:117] "RemoveContainer" containerID="966554b12de75f2ee6539b1092a07aaddd8bbba6b6605d7f24a8edf7e494287a"
	Oct 01 23:07:46 functional-808000 kubelet[6574]: I1001 23:07:46.606221    6574 scope.go:117] "RemoveContainer" containerID="966554b12de75f2ee6539b1092a07aaddd8bbba6b6605d7f24a8edf7e494287a"
	Oct 01 23:07:46 functional-808000 kubelet[6574]: E1001 23:07:46.606751    6574 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 966554b12de75f2ee6539b1092a07aaddd8bbba6b6605d7f24a8edf7e494287a" containerID="966554b12de75f2ee6539b1092a07aaddd8bbba6b6605d7f24a8edf7e494287a"
	Oct 01 23:07:46 functional-808000 kubelet[6574]: I1001 23:07:46.606775    6574 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"966554b12de75f2ee6539b1092a07aaddd8bbba6b6605d7f24a8edf7e494287a"} err="failed to get container status \"966554b12de75f2ee6539b1092a07aaddd8bbba6b6605d7f24a8edf7e494287a\": rpc error: code = Unknown desc = Error response from daemon: No such container: 966554b12de75f2ee6539b1092a07aaddd8bbba6b6605d7f24a8edf7e494287a"
	Oct 01 23:07:46 functional-808000 kubelet[6574]: E1001 23:07:46.665727    6574 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="115feb62-41e6-4dd2-ba94-c60bec08c1df" containerName="myfrontend"
	Oct 01 23:07:46 functional-808000 kubelet[6574]: I1001 23:07:46.665762    6574 memory_manager.go:354] "RemoveStaleState removing state" podUID="115feb62-41e6-4dd2-ba94-c60bec08c1df" containerName="myfrontend"
	Oct 01 23:07:46 functional-808000 kubelet[6574]: I1001 23:07:46.860471    6574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ad817a40-b95d-4865-b48a-d329222e5ed1\" (UniqueName: \"kubernetes.io/host-path/82053ec0-21ea-4f96-8e21-609876e36853-pvc-ad817a40-b95d-4865-b48a-d329222e5ed1\") pod \"sp-pod\" (UID: \"82053ec0-21ea-4f96-8e21-609876e36853\") " pod="default/sp-pod"
	Oct 01 23:07:46 functional-808000 kubelet[6574]: I1001 23:07:46.860519    6574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xgw8\" (UniqueName: \"kubernetes.io/projected/82053ec0-21ea-4f96-8e21-609876e36853-kube-api-access-4xgw8\") pod \"sp-pod\" (UID: \"82053ec0-21ea-4f96-8e21-609876e36853\") " pod="default/sp-pod"
	Oct 01 23:07:47 functional-808000 kubelet[6574]: I1001 23:07:47.665790    6574 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="115feb62-41e6-4dd2-ba94-c60bec08c1df" path="/var/lib/kubelet/pods/115feb62-41e6-4dd2-ba94-c60bec08c1df/volumes"
	Oct 01 23:07:48 functional-808000 kubelet[6574]: I1001 23:07:48.663554    6574 scope.go:117] "RemoveContainer" containerID="10f80f6cbd32cefb5d14e12ff1fd5475440c0be0129b1803754ab0102489a716"
	Oct 01 23:07:48 functional-808000 kubelet[6574]: E1001 23:07:48.663755    6574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-qznvn_default(0b732c13-b31f-44a3-863c-bec93580e329)\"" pod="default/hello-node-64b4f8f9ff-qznvn" podUID="0b732c13-b31f-44a3-863c-bec93580e329"
	Oct 01 23:07:48 functional-808000 kubelet[6574]: I1001 23:07:48.673994    6574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=1.9300519779999998 podStartE2EDuration="2.67392448s" podCreationTimestamp="2024-10-01 23:07:46 +0000 UTC" firstStartedPulling="2024-10-01 23:07:47.416918823 +0000 UTC m=+63.833713015" lastFinishedPulling="2024-10-01 23:07:48.160791283 +0000 UTC m=+64.577585517" observedRunningTime="2024-10-01 23:07:48.621463148 +0000 UTC m=+65.038257341" watchObservedRunningTime="2024-10-01 23:07:48.67392448 +0000 UTC m=+65.090766041"
	Oct 01 23:07:55 functional-808000 kubelet[6574]: I1001 23:07:55.251413    6574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/b9874ea6-1702-4bdf-8052-da9ca0943a98-test-volume\") pod \"busybox-mount\" (UID: \"b9874ea6-1702-4bdf-8052-da9ca0943a98\") " pod="default/busybox-mount"
	Oct 01 23:07:55 functional-808000 kubelet[6574]: I1001 23:07:55.251451    6574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5brds\" (UniqueName: \"kubernetes.io/projected/b9874ea6-1702-4bdf-8052-da9ca0943a98-kube-api-access-5brds\") pod \"busybox-mount\" (UID: \"b9874ea6-1702-4bdf-8052-da9ca0943a98\") " pod="default/busybox-mount"
	Oct 01 23:07:56 functional-808000 kubelet[6574]: I1001 23:07:56.663593    6574 scope.go:117] "RemoveContainer" containerID="5753b91b983be9037e5a41ff908df27abe2e2ce8d1004587b1af60e16f87d161"
	Oct 01 23:07:56 functional-808000 kubelet[6574]: E1001 23:07:56.664828    6574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-jlzmv_default(f321200c-9947-4afd-8d2d-fc23f3348b94)\"" pod="default/hello-node-connect-65d86f57f4-jlzmv" podUID="f321200c-9947-4afd-8d2d-fc23f3348b94"
	
	
	==> storage-provisioner [2d5d646df688] <==
	I1001 23:05:51.872156       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 23:05:51.877462       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 23:05:51.877544       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 23:06:09.293292       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 23:06:09.293641       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-808000_0114c335-4a8a-4966-a74f-4d305cff48ae!
	I1001 23:06:09.294246       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2c1dfcd8-bad2-4d6f-af21-e8efa2dc1780", APIVersion:"v1", ResourceVersion:"529", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-808000_0114c335-4a8a-4966-a74f-4d305cff48ae became leader
	I1001 23:06:09.394333       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-808000_0114c335-4a8a-4966-a74f-4d305cff48ae!
	
	
	==> storage-provisioner [754f9255562e] <==
	I1001 23:06:47.152346       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 23:06:47.167514       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 23:06:47.167648       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 23:07:04.581592       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 23:07:04.581786       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2c1dfcd8-bad2-4d6f-af21-e8efa2dc1780", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-808000_3585b09f-aa0b-4dfd-b8e0-80e931d821e7 became leader
	I1001 23:07:04.582215       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-808000_3585b09f-aa0b-4dfd-b8e0-80e931d821e7!
	I1001 23:07:04.682393       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-808000_3585b09f-aa0b-4dfd-b8e0-80e931d821e7!
	I1001 23:07:33.259510       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1001 23:07:33.259579       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    7db76ca1-25b3-4a11-b8bd-cfcbcd9212fc 343 0 2024-10-01 23:04:58 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-10-01 23:04:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-ad817a40-b95d-4865-b48a-d329222e5ed1 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  ad817a40-b95d-4865-b48a-d329222e5ed1 777 0 2024-10-01 23:07:33 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-10-01 23:07:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-10-01 23:07:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1001 23:07:33.259998       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-ad817a40-b95d-4865-b48a-d329222e5ed1" provisioned
	I1001 23:07:33.260029       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1001 23:07:33.260052       1 volume_store.go:212] Trying to save persistentvolume "pvc-ad817a40-b95d-4865-b48a-d329222e5ed1"
	I1001 23:07:33.260592       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"ad817a40-b95d-4865-b48a-d329222e5ed1", APIVersion:"v1", ResourceVersion:"777", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1001 23:07:33.264159       1 volume_store.go:219] persistentvolume "pvc-ad817a40-b95d-4865-b48a-d329222e5ed1" saved
	I1001 23:07:33.264999       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"ad817a40-b95d-4865-b48a-d329222e5ed1", APIVersion:"v1", ResourceVersion:"777", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-ad817a40-b95d-4865-b48a-d329222e5ed1
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-808000 -n functional-808000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-808000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-808000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-808000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-808000/192.168.105.4
	Start Time:       Tue, 01 Oct 2024 16:07:55 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5brds (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-5brds:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  6s    default-scheduler  Successfully assigned default/busybox-mount to functional-808000
	  Normal  Pulling    6s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (30.05s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (162.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-darwin-arm64 -p ha-056000 node stop m02 -v=7 --alsologtostderr: (12.192309042s)
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr
E1001 16:14:58.495770    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:371: (dbg) Done: out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr: (1m15.058211417s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 3 (1m15.037648s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 16:16:35.922860    3357 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E1001 16:16:35.922870    3357 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (162.29s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (150.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E1001 16:17:02.594653    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:17:14.618138    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:17:42.342251    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:392: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m15.058757542s)
ha_test.go:415: expected profile "ha-056000" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-056000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-056000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\
":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-056000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"
KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\"
:false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",
\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 3 (1m15.062487667s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 16:19:06.040373    3376 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E1001 16:19:06.040421    3376 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (150.12s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (185.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-056000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.14035975s)

                                                
                                                
-- stdout --
	* Starting "ha-056000-m02" control-plane node in "ha-056000" cluster
	* Restarting existing qemu2 VM for "ha-056000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-056000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:19:06.113384    3387 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:19:06.114023    3387 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:19:06.114030    3387 out.go:358] Setting ErrFile to fd 2...
	I1001 16:19:06.114033    3387 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:19:06.114298    3387 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:19:06.114773    3387 mustload.go:65] Loading cluster: ha-056000
	I1001 16:19:06.115114    3387 config.go:182] Loaded profile config "ha-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W1001 16:19:06.115441    3387 host.go:58] "ha-056000-m02" host status: Stopped
	I1001 16:19:06.119934    3387 out.go:177] * Starting "ha-056000-m02" control-plane node in "ha-056000" cluster
	I1001 16:19:06.123829    3387 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:19:06.123844    3387 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:19:06.123852    3387 cache.go:56] Caching tarball of preloaded images
	I1001 16:19:06.123930    3387 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:19:06.123938    3387 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:19:06.124013    3387 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/ha-056000/config.json ...
	I1001 16:19:06.124932    3387 start.go:360] acquireMachinesLock for ha-056000-m02: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:19:06.124999    3387 start.go:364] duration metric: took 35.375µs to acquireMachinesLock for "ha-056000-m02"
	I1001 16:19:06.125013    3387 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:19:06.125019    3387 fix.go:54] fixHost starting: m02
	I1001 16:19:06.125150    3387 fix.go:112] recreateIfNeeded on ha-056000-m02: state=Stopped err=<nil>
	W1001 16:19:06.125157    3387 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:19:06.129832    3387 out.go:177] * Restarting existing qemu2 VM for "ha-056000-m02" ...
	I1001 16:19:06.132760    3387 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:19:06.132813    3387 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:58:41:19:6d:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000-m02/disk.qcow2
	I1001 16:19:06.136203    3387 main.go:141] libmachine: STDOUT: 
	I1001 16:19:06.136249    3387 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:19:06.136294    3387 fix.go:56] duration metric: took 11.27375ms for fixHost
	I1001 16:19:06.136299    3387 start.go:83] releasing machines lock for "ha-056000-m02", held for 11.295334ms
	W1001 16:19:06.136309    3387 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:19:06.136379    3387 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:19:06.136385    3387 start.go:729] Will try again in 5 seconds ...
	I1001 16:19:11.138526    3387 start.go:360] acquireMachinesLock for ha-056000-m02: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:19:11.139060    3387 start.go:364] duration metric: took 416.583µs to acquireMachinesLock for "ha-056000-m02"
	I1001 16:19:11.139216    3387 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:19:11.139248    3387 fix.go:54] fixHost starting: m02
	I1001 16:19:11.140005    3387 fix.go:112] recreateIfNeeded on ha-056000-m02: state=Stopped err=<nil>
	W1001 16:19:11.140033    3387 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:19:11.144962    3387 out.go:177] * Restarting existing qemu2 VM for "ha-056000-m02" ...
	I1001 16:19:11.148955    3387 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:19:11.149144    3387 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:58:41:19:6d:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000-m02/disk.qcow2
	I1001 16:19:11.157736    3387 main.go:141] libmachine: STDOUT: 
	I1001 16:19:11.157797    3387 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:19:11.157876    3387 fix.go:56] duration metric: took 18.631416ms for fixHost
	I1001 16:19:11.157892    3387 start.go:83] releasing machines lock for "ha-056000-m02", held for 18.810125ms
	W1001 16:19:11.158118    3387 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-056000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-056000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:19:11.162979    3387 out.go:201] 
	W1001 16:19:11.166980    3387 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:19:11.167000    3387 out.go:270] * 
	* 
	W1001 16:19:11.174363    3387 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:19:11.178932    3387 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:424: I1001 16:19:06.113384    3387 out.go:345] Setting OutFile to fd 1 ...
I1001 16:19:06.114023    3387 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 16:19:06.114030    3387 out.go:358] Setting ErrFile to fd 2...
I1001 16:19:06.114033    3387 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 16:19:06.114298    3387 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
I1001 16:19:06.114773    3387 mustload.go:65] Loading cluster: ha-056000
I1001 16:19:06.115114    3387 config.go:182] Loaded profile config "ha-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
W1001 16:19:06.115441    3387 host.go:58] "ha-056000-m02" host status: Stopped
I1001 16:19:06.119934    3387 out.go:177] * Starting "ha-056000-m02" control-plane node in "ha-056000" cluster
I1001 16:19:06.123829    3387 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1001 16:19:06.123844    3387 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I1001 16:19:06.123852    3387 cache.go:56] Caching tarball of preloaded images
I1001 16:19:06.123930    3387 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1001 16:19:06.123938    3387 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I1001 16:19:06.124013    3387 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/ha-056000/config.json ...
I1001 16:19:06.124932    3387 start.go:360] acquireMachinesLock for ha-056000-m02: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1001 16:19:06.124999    3387 start.go:364] duration metric: took 35.375µs to acquireMachinesLock for "ha-056000-m02"
I1001 16:19:06.125013    3387 start.go:96] Skipping create...Using existing machine configuration
I1001 16:19:06.125019    3387 fix.go:54] fixHost starting: m02
I1001 16:19:06.125150    3387 fix.go:112] recreateIfNeeded on ha-056000-m02: state=Stopped err=<nil>
W1001 16:19:06.125157    3387 fix.go:138] unexpected machine state, will restart: <nil>
I1001 16:19:06.129832    3387 out.go:177] * Restarting existing qemu2 VM for "ha-056000-m02" ...
I1001 16:19:06.132760    3387 qemu.go:418] Using hvf for hardware acceleration
I1001 16:19:06.132813    3387 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:58:41:19:6d:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000-m02/disk.qcow2
I1001 16:19:06.136203    3387 main.go:141] libmachine: STDOUT: 
I1001 16:19:06.136249    3387 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I1001 16:19:06.136294    3387 fix.go:56] duration metric: took 11.27375ms for fixHost
I1001 16:19:06.136299    3387 start.go:83] releasing machines lock for "ha-056000-m02", held for 11.295334ms
W1001 16:19:06.136309    3387 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1001 16:19:06.136379    3387 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1001 16:19:06.136385    3387 start.go:729] Will try again in 5 seconds ...
I1001 16:19:11.138526    3387 start.go:360] acquireMachinesLock for ha-056000-m02: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1001 16:19:11.139060    3387 start.go:364] duration metric: took 416.583µs to acquireMachinesLock for "ha-056000-m02"
I1001 16:19:11.139216    3387 start.go:96] Skipping create...Using existing machine configuration
I1001 16:19:11.139248    3387 fix.go:54] fixHost starting: m02
I1001 16:19:11.140005    3387 fix.go:112] recreateIfNeeded on ha-056000-m02: state=Stopped err=<nil>
W1001 16:19:11.140033    3387 fix.go:138] unexpected machine state, will restart: <nil>
I1001 16:19:11.144962    3387 out.go:177] * Restarting existing qemu2 VM for "ha-056000-m02" ...
I1001 16:19:11.148955    3387 qemu.go:418] Using hvf for hardware acceleration
I1001 16:19:11.149144    3387 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:58:41:19:6d:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000-m02/disk.qcow2
I1001 16:19:11.157736    3387 main.go:141] libmachine: STDOUT: 
I1001 16:19:11.157797    3387 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I1001 16:19:11.157876    3387 fix.go:56] duration metric: took 18.631416ms for fixHost
I1001 16:19:11.157892    3387 start.go:83] releasing machines lock for "ha-056000-m02", held for 18.810125ms
W1001 16:19:11.158118    3387 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-056000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-056000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1001 16:19:11.162979    3387 out.go:201] 
W1001 16:19:11.166980    3387 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1001 16:19:11.167000    3387 out.go:270] * 
* 
W1001 16:19:11.174363    3387 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1001 16:19:11.178932    3387 out.go:201] 

                                                
                                                
ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-056000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr: (1m15.071118208s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
ha_test.go:450: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (30.074154625s)

                                                
                                                
** stderr ** 
	Unable to connect to the server: dial tcp 192.168.105.254:8443: i/o timeout

                                                
                                                
** /stderr **
ha_test.go:452: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
E1001 16:22:02.592577    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 3 (1m15.0374335s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 16:22:11.367350    3406 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E1001 16:22:11.367366    3406 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (185.33s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E1001 16:22:14.614537    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:23:25.684062    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m15.065929666s)
ha_test.go:309: expected profile "ha-056000" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-056000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-056000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1
,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-056000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"Kub
ernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":fa
lse,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"M
ountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 3 (1m15.04141075s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1001 16:24:41.471146    3426 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E1001 16:24:41.471176    3426 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.11s)
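
Note: the status failure above reduces to a plain SSH dial timeout to the primary node (status.go:417 and :119 dialing 192.168.105.5:22). A minimal way to reproduce just that connectivity probe from the macOS host, assuming BSD netcat and taking the IP and port from the error lines above (diagnostic sketch, not part of the test run):

	# probe the node's SSH port with a 5-second connect timeout (-G is macOS netcat's connect timeout)
	nc -vz -G 5 192.168.105.5 22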

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-056000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-056000 -v=7 --alsologtostderr
E1001 16:27:02.589087    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:27:14.612095    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:28:37.698775    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-056000 -v=7 --alsologtostderr: (5m27.17593325s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-056000 --wait=true -v=7 --alsologtostderr
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-056000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.219389542s)

                                                
                                                
-- stdout --
	* [ha-056000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-056000" primary control-plane node in "ha-056000" cluster
	* Restarting existing qemu2 VM for "ha-056000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-056000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:30:08.784515    3774 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:30:08.784995    3774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:30:08.785003    3774 out.go:358] Setting ErrFile to fd 2...
	I1001 16:30:08.785006    3774 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:30:08.785250    3774 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:30:08.786908    3774 out.go:352] Setting JSON to false
	I1001 16:30:08.808143    3774 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3576,"bootTime":1727821832,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:30:08.808236    3774 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:30:08.813663    3774 out.go:177] * [ha-056000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:30:08.821600    3774 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:30:08.821642    3774 notify.go:220] Checking for updates...
	I1001 16:30:08.828585    3774 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:30:08.831591    3774 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:30:08.834501    3774 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:30:08.837670    3774 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:30:08.840623    3774 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:30:08.842337    3774 config.go:182] Loaded profile config "ha-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:30:08.842391    3774 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:30:08.846625    3774 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 16:30:08.853485    3774 start.go:297] selected driver: qemu2
	I1001 16:30:08.853492    3774 start.go:901] validating driver "qemu2" against &{Name:ha-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.1 ClusterName:ha-056000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass
:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:30:08.853602    3774 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:30:08.856587    3774 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:30:08.856611    3774 cni.go:84] Creating CNI manager for ""
	I1001 16:30:08.856648    3774 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1001 16:30:08.856697    3774 start.go:340] cluster config:
	{Name:ha-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-056000 Namespace:default APIServerHAVIP:192.168.
105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:fals
e inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:30:08.861419    3774 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:30:08.869641    3774 out.go:177] * Starting "ha-056000" primary control-plane node in "ha-056000" cluster
	I1001 16:30:08.873620    3774 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:30:08.873638    3774 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:30:08.873648    3774 cache.go:56] Caching tarball of preloaded images
	I1001 16:30:08.873714    3774 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:30:08.873721    3774 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:30:08.873819    3774 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/ha-056000/config.json ...
	I1001 16:30:08.874279    3774 start.go:360] acquireMachinesLock for ha-056000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:30:08.874316    3774 start.go:364] duration metric: took 31µs to acquireMachinesLock for "ha-056000"
	I1001 16:30:08.874325    3774 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:30:08.874329    3774 fix.go:54] fixHost starting: 
	I1001 16:30:08.874460    3774 fix.go:112] recreateIfNeeded on ha-056000: state=Stopped err=<nil>
	W1001 16:30:08.874470    3774 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:30:08.877627    3774 out.go:177] * Restarting existing qemu2 VM for "ha-056000" ...
	I1001 16:30:08.885589    3774 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:30:08.885635    3774 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:41:86:66:1e:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000/disk.qcow2
	I1001 16:30:08.887820    3774 main.go:141] libmachine: STDOUT: 
	I1001 16:30:08.887840    3774 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:30:08.887874    3774 fix.go:56] duration metric: took 13.542333ms for fixHost
	I1001 16:30:08.887878    3774 start.go:83] releasing machines lock for "ha-056000", held for 13.557709ms
	W1001 16:30:08.887885    3774 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:30:08.887927    3774 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:30:08.887932    3774 start.go:729] Will try again in 5 seconds ...
	I1001 16:30:13.890044    3774 start.go:360] acquireMachinesLock for ha-056000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:30:13.890258    3774 start.go:364] duration metric: took 164.5µs to acquireMachinesLock for "ha-056000"
	I1001 16:30:13.890328    3774 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:30:13.890343    3774 fix.go:54] fixHost starting: 
	I1001 16:30:13.890768    3774 fix.go:112] recreateIfNeeded on ha-056000: state=Stopped err=<nil>
	W1001 16:30:13.890788    3774 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:30:13.895660    3774 out.go:177] * Restarting existing qemu2 VM for "ha-056000" ...
	I1001 16:30:13.900973    3774 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:30:13.901094    3774 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:41:86:66:1e:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000/disk.qcow2
	I1001 16:30:13.906137    3774 main.go:141] libmachine: STDOUT: 
	I1001 16:30:13.906196    3774 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:30:13.906270    3774 fix.go:56] duration metric: took 15.927833ms for fixHost
	I1001 16:30:13.906288    3774 start.go:83] releasing machines lock for "ha-056000", held for 16.014667ms
	W1001 16:30:13.906506    3774 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-056000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-056000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:30:13.912717    3774 out.go:201] 
	W1001 16:30:13.915728    3774 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:30:13.915752    3774 out.go:270] * 
	* 
	W1001 16:30:13.918201    3774 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:30:13.926716    3774 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-056000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-056000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 7 (32.902708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.57s)
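
Note: both restart attempts in this section fail at the same point: the driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"). A minimal check of that socket on the host, assuming lsof is available (diagnostic sketch, not part of the test run):

	ls -l /var/run/socket_vmnet          # the SocketVMnetPath the driver uses (see the config dump above)
	sudo lsof -U | grep socket_vmnet     # is any process listening on that unix socket?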

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-056000 node delete m03 -v=7 --alsologtostderr: exit status 83 (39.371083ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-056000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-056000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:30:14.069601    3787 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:30:14.070083    3787 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:30:14.070087    3787 out.go:358] Setting ErrFile to fd 2...
	I1001 16:30:14.070090    3787 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:30:14.070284    3787 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:30:14.070594    3787 mustload.go:65] Loading cluster: ha-056000
	I1001 16:30:14.070952    3787 config.go:182] Loaded profile config "ha-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W1001 16:30:14.071258    3787 out.go:270] ! The control-plane node ha-056000 host is not running (will try others): state=Stopped
	! The control-plane node ha-056000 host is not running (will try others): state=Stopped
	W1001 16:30:14.071375    3787 out.go:270] ! The control-plane node ha-056000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-056000-m02 host is not running (will try others): state=Stopped
	I1001 16:30:14.074728    3787 out.go:177] * The control-plane node ha-056000-m03 host is not running: state=Stopped
	I1001 16:30:14.077419    3787 out.go:177]   To start a cluster, run: "minikube start -p ha-056000"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-056000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr: exit status 7 (30.283875ms)

                                                
                                                
-- stdout --
	ha-056000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-056000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-056000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-056000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:30:14.109462    3789 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:30:14.109620    3789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:30:14.109623    3789 out.go:358] Setting ErrFile to fd 2...
	I1001 16:30:14.109626    3789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:30:14.109760    3789 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:30:14.109890    3789 out.go:352] Setting JSON to false
	I1001 16:30:14.109904    3789 mustload.go:65] Loading cluster: ha-056000
	I1001 16:30:14.109959    3789 notify.go:220] Checking for updates...
	I1001 16:30:14.110159    3789 config.go:182] Loaded profile config "ha-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:30:14.110169    3789 status.go:174] checking status of ha-056000 ...
	I1001 16:30:14.110415    3789 status.go:371] ha-056000 host status = "Stopped" (err=<nil>)
	I1001 16:30:14.110419    3789 status.go:384] host is not running, skipping remaining checks
	I1001 16:30:14.110421    3789 status.go:176] ha-056000 status: &{Name:ha-056000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 16:30:14.110431    3789 status.go:174] checking status of ha-056000-m02 ...
	I1001 16:30:14.110527    3789 status.go:371] ha-056000-m02 host status = "Stopped" (err=<nil>)
	I1001 16:30:14.110529    3789 status.go:384] host is not running, skipping remaining checks
	I1001 16:30:14.110531    3789 status.go:176] ha-056000-m02 status: &{Name:ha-056000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 16:30:14.110535    3789 status.go:174] checking status of ha-056000-m03 ...
	I1001 16:30:14.110624    3789 status.go:371] ha-056000-m03 host status = "Stopped" (err=<nil>)
	I1001 16:30:14.110628    3789 status.go:384] host is not running, skipping remaining checks
	I1001 16:30:14.110630    3789 status.go:176] ha-056000-m03 status: &{Name:ha-056000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 16:30:14.110633    3789 status.go:174] checking status of ha-056000-m04 ...
	I1001 16:30:14.110728    3789 status.go:371] ha-056000-m04 host status = "Stopped" (err=<nil>)
	I1001 16:30:14.110731    3789 status.go:384] host is not running, skipping remaining checks
	I1001 16:30:14.110733    3789 status.go:176] ha-056000-m04 status: &{Name:ha-056000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 7 (29.839459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
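
Note: the delete fails before touching any node; with every host stopped, minikube reports each control-plane host as not running and exits with status 83. A one-line sketch of the exit-status check that ha_test.go:491 performs, using the command already shown in the log (the test treats any non-zero status as a failure):

	out/minikube-darwin-arm64 -p ha-056000 node delete m03 -v=7 --alsologtostderr
	echo "node delete exit status: $?"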

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-056000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-056000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-056000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACoun
t\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-056000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,
\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"log
viewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP
\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 7 (29.872083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
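
Note: ha_test.go:415 drives this failure by parsing the `profile list --output json` payload quoted above and comparing the profile's Status field ("Starting") against the expected "Degraded". A sketch of pulling that field out directly, assuming jq is installed on the host (not part of the test; the .valid/.Name/.Status paths come from the JSON in the log):

	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-056000") | .Status'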

                                                
                                    
TestMultiControlPlane/serial/StopCluster (300.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 stop -v=7 --alsologtostderr
E1001 16:32:02.586312    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:32:14.609533    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-darwin-arm64 -p ha-056000 stop -v=7 --alsologtostderr: (5m0.131761792s)
ha_test.go:539: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr: exit status 7 (68.278125ms)

                                                
                                                
-- stdout --
	ha-056000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-056000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-056000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-056000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:35:14.412981    3896 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:35:14.413182    3896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:35:14.413187    3896 out.go:358] Setting ErrFile to fd 2...
	I1001 16:35:14.413190    3896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:35:14.413367    3896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:35:14.413534    3896 out.go:352] Setting JSON to false
	I1001 16:35:14.413551    3896 mustload.go:65] Loading cluster: ha-056000
	I1001 16:35:14.413597    3896 notify.go:220] Checking for updates...
	I1001 16:35:14.413859    3896 config.go:182] Loaded profile config "ha-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:35:14.413870    3896 status.go:174] checking status of ha-056000 ...
	I1001 16:35:14.414186    3896 status.go:371] ha-056000 host status = "Stopped" (err=<nil>)
	I1001 16:35:14.414190    3896 status.go:384] host is not running, skipping remaining checks
	I1001 16:35:14.414192    3896 status.go:176] ha-056000 status: &{Name:ha-056000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 16:35:14.414206    3896 status.go:174] checking status of ha-056000-m02 ...
	I1001 16:35:14.414330    3896 status.go:371] ha-056000-m02 host status = "Stopped" (err=<nil>)
	I1001 16:35:14.414334    3896 status.go:384] host is not running, skipping remaining checks
	I1001 16:35:14.414336    3896 status.go:176] ha-056000-m02 status: &{Name:ha-056000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 16:35:14.414341    3896 status.go:174] checking status of ha-056000-m03 ...
	I1001 16:35:14.414459    3896 status.go:371] ha-056000-m03 host status = "Stopped" (err=<nil>)
	I1001 16:35:14.414463    3896 status.go:384] host is not running, skipping remaining checks
	I1001 16:35:14.414465    3896 status.go:176] ha-056000-m03 status: &{Name:ha-056000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 16:35:14.414469    3896 status.go:174] checking status of ha-056000-m04 ...
	I1001 16:35:14.414581    3896 status.go:371] ha-056000-m04 host status = "Stopped" (err=<nil>)
	I1001 16:35:14.414584    3896 status.go:384] host is not running, skipping remaining checks
	I1001 16:35:14.414586    3896 status.go:176] ha-056000-m04 status: &{Name:ha-056000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": ha-056000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-056000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-056000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-056000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": ha-056000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-056000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-056000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-056000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr": ha-056000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-056000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-056000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-056000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 7 (32.258584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (300.23s)
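
Note: the three assertions above (ha_test.go:545, :551, :554) are simple counts over the status text: control-plane entries, stopped kubelets, and stopped apiservers. A sketch of the same counts with grep, reusing the command already shown in the log (status.txt is just a scratch file for this sketch):

	out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr > status.txt
	grep -c "type: Control Plane" status.txt   # control-plane nodes reported
	grep -c "kubelet: Stopped" status.txt      # stopped kubelets
	grep -c "apiserver: Stopped" status.txt    # stopped apiservers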

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-056000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-056000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.186398167s)

                                                
                                                
-- stdout --
	* [ha-056000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-056000" primary control-plane node in "ha-056000" cluster
	* Restarting existing qemu2 VM for "ha-056000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-056000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:35:14.476095    3900 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:35:14.476224    3900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:35:14.476228    3900 out.go:358] Setting ErrFile to fd 2...
	I1001 16:35:14.476230    3900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:35:14.476382    3900 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:35:14.477572    3900 out.go:352] Setting JSON to false
	I1001 16:35:14.493929    3900 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3882,"bootTime":1727821832,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:35:14.494008    3900 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:35:14.499202    3900 out.go:177] * [ha-056000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:35:14.505378    3900 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:35:14.505459    3900 notify.go:220] Checking for updates...
	I1001 16:35:14.512318    3900 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:35:14.515342    3900 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:35:14.518248    3900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:35:14.521326    3900 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:35:14.524344    3900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:35:14.525958    3900 config.go:182] Loaded profile config "ha-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:35:14.526250    3900 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:35:14.530365    3900 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 16:35:14.537208    3900 start.go:297] selected driver: qemu2
	I1001 16:35:14.537214    3900 start.go:901] validating driver "qemu2" against &{Name:ha-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.1 ClusterName:ha-056000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storag
eclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:35:14.537297    3900 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:35:14.539591    3900 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:35:14.539615    3900 cni.go:84] Creating CNI manager for ""
	I1001 16:35:14.539638    3900 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1001 16:35:14.539687    3900 start.go:340] cluster config:
	{Name:ha-056000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-056000 Namespace:default APIServerHAVIP:192.168.
105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:fals
e inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:35:14.543251    3900 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:35:14.551376    3900 out.go:177] * Starting "ha-056000" primary control-plane node in "ha-056000" cluster
	I1001 16:35:14.555302    3900 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:35:14.555316    3900 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:35:14.555326    3900 cache.go:56] Caching tarball of preloaded images
	I1001 16:35:14.555382    3900 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:35:14.555387    3900 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:35:14.555469    3900 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/ha-056000/config.json ...
	I1001 16:35:14.555895    3900 start.go:360] acquireMachinesLock for ha-056000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:35:14.555929    3900 start.go:364] duration metric: took 27.708µs to acquireMachinesLock for "ha-056000"
	I1001 16:35:14.555937    3900 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:35:14.555941    3900 fix.go:54] fixHost starting: 
	I1001 16:35:14.556060    3900 fix.go:112] recreateIfNeeded on ha-056000: state=Stopped err=<nil>
	W1001 16:35:14.556069    3900 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:35:14.560373    3900 out.go:177] * Restarting existing qemu2 VM for "ha-056000" ...
	I1001 16:35:14.568287    3900 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:35:14.568319    3900 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:41:86:66:1e:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000/disk.qcow2
	I1001 16:35:14.570212    3900 main.go:141] libmachine: STDOUT: 
	I1001 16:35:14.570229    3900 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:35:14.570260    3900 fix.go:56] duration metric: took 14.317083ms for fixHost
	I1001 16:35:14.570266    3900 start.go:83] releasing machines lock for "ha-056000", held for 14.333041ms
	W1001 16:35:14.570273    3900 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:35:14.570309    3900 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:35:14.570313    3900 start.go:729] Will try again in 5 seconds ...
	I1001 16:35:19.572381    3900 start.go:360] acquireMachinesLock for ha-056000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:35:19.572807    3900 start.go:364] duration metric: took 299.625µs to acquireMachinesLock for "ha-056000"
	I1001 16:35:19.572976    3900 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:35:19.572996    3900 fix.go:54] fixHost starting: 
	I1001 16:35:19.573721    3900 fix.go:112] recreateIfNeeded on ha-056000: state=Stopped err=<nil>
	W1001 16:35:19.573748    3900 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:35:19.577959    3900 out.go:177] * Restarting existing qemu2 VM for "ha-056000" ...
	I1001 16:35:19.586128    3900 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:35:19.586339    3900 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:41:86:66:1e:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/ha-056000/disk.qcow2
	I1001 16:35:19.595302    3900 main.go:141] libmachine: STDOUT: 
	I1001 16:35:19.595353    3900 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:35:19.595415    3900 fix.go:56] duration metric: took 22.419042ms for fixHost
	I1001 16:35:19.595432    3900 start.go:83] releasing machines lock for "ha-056000", held for 22.575583ms
	W1001 16:35:19.595585    3900 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-056000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-056000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:35:19.600151    3900 out.go:201] 
	W1001 16:35:19.608289    3900 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:35:19.608329    3900 out.go:270] * 
	* 
	W1001 16:35:19.610980    3900 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:35:19.621175    3900 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-056000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 7 (67.88225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)
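Every retry above fails at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never restarted. As an illustrative aside (not part of the test suite), a minimal Go probe such as the following can confirm whether the socket_vmnet daemon is accepting connections on that path before the suite is rerun; the socket path is copied from the logs above and the program itself is hypothetical.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path taken from the failing runs above.
	const sockPath = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sockPath, 2*time.Second)
	if err != nil {
		// A "connection refused" here matches the driver error in the logs and
		// usually means the socket_vmnet daemon is not running on the host.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sockPath, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sockPath)
}

If the dial fails with "connection refused", restarting the socket_vmnet daemon on the build host (for example via its launchd service, if it was installed that way) is the usual prerequisite before these qemu2 tests can pass.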

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-056000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-056000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-056000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACoun
t\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-056000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,
\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"log
viewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP
\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 7 (29.671625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
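The assertion at ha_test.go:415 decodes the `minikube profile list --output json` document quoted above and compares the profile's Status field; because the VM never came back up, the status is still "Starting" rather than the expected "Degraded". A small sketch of that decoding step is shown below, using only the fields relevant to the check (the struct names are illustrative, not the test's actual types).

package main

import (
	"encoding/json"
	"fmt"
)

// Only the fields relevant to the status check are modelled here; the real
// config payload is much larger, as the quoted output shows.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	// Trimmed-down version of the document returned by `profile list --output json`.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-056000","Status":"Starting"}]}`)

	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The failing assertion expects "Degraded" here after the cluster restart.
		fmt.Printf("profile %s has status %q\n", p.Name, p.Status)
	}
}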

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-056000 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-056000 --control-plane -v=7 --alsologtostderr: exit status 83 (42.69325ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-056000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-056000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:35:19.813707    3915 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:35:19.813858    3915 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:35:19.813861    3915 out.go:358] Setting ErrFile to fd 2...
	I1001 16:35:19.813863    3915 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:35:19.813992    3915 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:35:19.814230    3915 mustload.go:65] Loading cluster: ha-056000
	I1001 16:35:19.814468    3915 config.go:182] Loaded profile config "ha-056000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	W1001 16:35:19.814813    3915 out.go:270] ! The control-plane node ha-056000 host is not running (will try others): state=Stopped
	! The control-plane node ha-056000 host is not running (will try others): state=Stopped
	W1001 16:35:19.814911    3915 out.go:270] ! The control-plane node ha-056000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-056000-m02 host is not running (will try others): state=Stopped
	I1001 16:35:19.819242    3915 out.go:177] * The control-plane node ha-056000-m03 host is not running: state=Stopped
	I1001 16:35:19.823193    3915 out.go:177]   To start a cluster, run: "minikube start -p ha-056000"

                                                
                                                
** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-056000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 7 (29.626208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:309: expected profile "ha-056000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-056000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-056000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-056000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logvie
wer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":
\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-056000 -n ha-056000: exit status 7 (30.87925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-056000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                    
TestImageBuild/serial/Setup (10.05s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-135000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-135000 --driver=qemu2 : exit status 80 (9.977403958s)

                                                
                                                
-- stdout --
	* [image-135000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-135000" primary control-plane node in "image-135000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-135000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-135000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-135000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-135000 -n image-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-135000 -n image-135000: exit status 7 (68.029709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (10.05s)

                                                
                                    
TestJSONOutput/start/Command (9.82s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-906000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-906000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.8199585s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c8676bb2-2733-4955-a3a0-ae4f28b80285","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-906000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b75e48fe-3640-4d1f-a26f-2c3be9035707","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19740"}}
	{"specversion":"1.0","id":"650adefe-bf84-4c5d-9bd1-4f219433eafa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig"}}
	{"specversion":"1.0","id":"33ab3363-d871-4511-b565-39c28938311c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"1544c0ca-b63f-4f78-ab2b-2c103bfd98f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d07b2acc-1c81-49c6-8c41-1a45b97830ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube"}}
	{"specversion":"1.0","id":"182c7fc9-8230-43b5-95fb-2daa5e92f29b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"25171779-e1cb-4788-8d38-36f6cda80a4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"63509610-eafa-4a52-bdfb-e95b0945e32a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"f23e35f3-597f-46b7-90de-1eb268067cd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-906000\" primary control-plane node in \"json-output-906000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c068b55d-356c-4572-bc4d-acf701e0d9f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"4e2b2a35-786b-42cd-ba9d-a8c3c7a37e53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-906000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"fcc4f2f1-fbd4-48e1-bd50-10cad93b7c8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"20e888d2-4c2c-4767-a713-00f4f79df998","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"3f316cb0-16fa-46e0-9846-ccd3254b789a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-906000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"45544f98-a500-4157-a5d1-0d3c2ccba0ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"bb6608bb-14fd-4739-bc99-989129bea8a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-906000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.82s)
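Here the start command exits 80 and, because the qemu driver's raw "OUTPUT:"/"ERROR:" lines are interleaved with the cloud events on stdout, the per-line JSON decoding then fails with the "invalid character 'O'" error reported by json_output_test.go:70. A minimal sketch of that failure mode follows (illustrative only, not the test's actual code).

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Condensed stand-in for the --output=json stdout captured above: cloud
	// events interleaved with raw driver output.
	stdout := strings.Join([]string{
		`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"message":"Creating qemu2 VM ..."}}`,
		`OUTPUT: `,
		`ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`,
	}, "\n")

	for _, line := range strings.Split(stdout, "\n") {
		var event map[string]interface{}
		if err := json.Unmarshal([]byte(line), &event); err != nil {
			// For the "OUTPUT: " line this prints the same
			// `invalid character 'O' looking for beginning of value`
			// error seen in the test failure above.
			fmt.Printf("cannot decode %q: %v\n", line, err)
			continue
		}
		fmt.Printf("decoded event of type %v\n", event["type"])
	}
}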

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-906000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-906000 --output=json --user=testUser: exit status 83 (77.605834ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7fe7b1a7-c7d3-4884-b817-eccc7ae7705f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-906000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"9d3d9f62-bd61-4a15-b93a-7ebb33989cbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-906000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-906000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-906000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-906000 --output=json --user=testUser: exit status 83 (45.352916ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-906000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-906000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-906000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-906000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.21s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-360000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-360000 --driver=qemu2 : exit status 80 (9.914260584s)

                                                
                                                
-- stdout --
	* [first-360000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-360000" primary control-plane node in "first-360000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-360000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-360000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-10-01 16:35:54.202116 -0700 PDT m=+2947.089058042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-361000 -n second-361000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-361000 -n second-361000: exit status 85 (76.195667ms)

                                                
                                                
-- stdout --
	* Profile "second-361000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-361000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-361000" host is not running, skipping log retrieval (state="* Profile \"second-361000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-361000\"")
helpers_test.go:175: Cleaning up "second-361000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-361000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-10-01 16:35:54.389496 -0700 PDT m=+2947.276439667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-360000 -n first-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-360000 -n first-360000: exit status 7 (30.218125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-360000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-360000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-360000
--- FAIL: TestMinikubeProfile (10.21s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.62s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-555000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-555000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.548066333s)

                                                
                                                
-- stdout --
	* [mount-start-1-555000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-555000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-555000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-555000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-555000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-555000 -n mount-start-1-555000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-555000 -n mount-start-1-555000: exit status 7 (68.482166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-555000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.62s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (10.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-603000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-603000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.938208375s)

                                                
                                                
-- stdout --
	* [multinode-603000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-603000" primary control-plane node in "multinode-603000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-603000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:36:05.330658    4081 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:36:05.330783    4081 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:36:05.330787    4081 out.go:358] Setting ErrFile to fd 2...
	I1001 16:36:05.330789    4081 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:36:05.330921    4081 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:36:05.331925    4081 out.go:352] Setting JSON to false
	I1001 16:36:05.348028    4081 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3933,"bootTime":1727821832,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:36:05.348101    4081 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:36:05.353856    4081 out.go:177] * [multinode-603000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:36:05.360783    4081 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:36:05.360838    4081 notify.go:220] Checking for updates...
	I1001 16:36:05.367722    4081 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:36:05.370783    4081 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:36:05.373778    4081 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:36:05.376668    4081 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:36:05.379848    4081 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:36:05.382949    4081 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:36:05.386755    4081 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:36:05.393792    4081 start.go:297] selected driver: qemu2
	I1001 16:36:05.393799    4081 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:36:05.393807    4081 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:36:05.396084    4081 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 16:36:05.397699    4081 out.go:177] * Automatically selected the socket_vmnet network
	I1001 16:36:05.400905    4081 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:36:05.400931    4081 cni.go:84] Creating CNI manager for ""
	I1001 16:36:05.400957    4081 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1001 16:36:05.400961    4081 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 16:36:05.401003    4081 start.go:340] cluster config:
	{Name:multinode-603000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_v
mnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:36:05.404623    4081 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:36:05.411759    4081 out.go:177] * Starting "multinode-603000" primary control-plane node in "multinode-603000" cluster
	I1001 16:36:05.415805    4081 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:36:05.415820    4081 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:36:05.415829    4081 cache.go:56] Caching tarball of preloaded images
	I1001 16:36:05.415893    4081 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:36:05.415899    4081 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:36:05.416144    4081 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/multinode-603000/config.json ...
	I1001 16:36:05.416156    4081 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/multinode-603000/config.json: {Name:mkbe9dfa7bf8e774679bdfa7a017af0b83bb9be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:36:05.416397    4081 start.go:360] acquireMachinesLock for multinode-603000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:36:05.416434    4081 start.go:364] duration metric: took 30.584µs to acquireMachinesLock for "multinode-603000"
	I1001 16:36:05.416447    4081 start.go:93] Provisioning new machine with config: &{Name:multinode-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:multinode-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:36:05.416477    4081 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:36:05.420776    4081 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 16:36:05.438912    4081 start.go:159] libmachine.API.Create for "multinode-603000" (driver="qemu2")
	I1001 16:36:05.438946    4081 client.go:168] LocalClient.Create starting
	I1001 16:36:05.439003    4081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:36:05.439041    4081 main.go:141] libmachine: Decoding PEM data...
	I1001 16:36:05.439050    4081 main.go:141] libmachine: Parsing certificate...
	I1001 16:36:05.439095    4081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:36:05.439118    4081 main.go:141] libmachine: Decoding PEM data...
	I1001 16:36:05.439128    4081 main.go:141] libmachine: Parsing certificate...
	I1001 16:36:05.439477    4081 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:36:05.602787    4081 main.go:141] libmachine: Creating SSH key...
	I1001 16:36:05.766615    4081 main.go:141] libmachine: Creating Disk image...
	I1001 16:36:05.766625    4081 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:36:05.766975    4081 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/disk.qcow2
	I1001 16:36:05.776250    4081 main.go:141] libmachine: STDOUT: 
	I1001 16:36:05.776274    4081 main.go:141] libmachine: STDERR: 
	I1001 16:36:05.776345    4081 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/disk.qcow2 +20000M
	I1001 16:36:05.784097    4081 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:36:05.784110    4081 main.go:141] libmachine: STDERR: 
	I1001 16:36:05.784124    4081 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/disk.qcow2
	I1001 16:36:05.784129    4081 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:36:05.784140    4081 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:36:05.784166    4081 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:1f:5e:80:31:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/disk.qcow2
	I1001 16:36:05.785698    4081 main.go:141] libmachine: STDOUT: 
	I1001 16:36:05.785713    4081 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:36:05.785735    4081 client.go:171] duration metric: took 346.787625ms to LocalClient.Create
	I1001 16:36:07.787971    4081 start.go:128] duration metric: took 2.371475167s to createHost
	I1001 16:36:07.788054    4081 start.go:83] releasing machines lock for "multinode-603000", held for 2.37163375s
	W1001 16:36:07.788115    4081 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:36:07.801334    4081 out.go:177] * Deleting "multinode-603000" in qemu2 ...
	W1001 16:36:07.839908    4081 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:36:07.839936    4081 start.go:729] Will try again in 5 seconds ...
	I1001 16:36:12.842139    4081 start.go:360] acquireMachinesLock for multinode-603000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:36:12.842715    4081 start.go:364] duration metric: took 430.834µs to acquireMachinesLock for "multinode-603000"
	I1001 16:36:12.842849    4081 start.go:93] Provisioning new machine with config: &{Name:multinode-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:multinode-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:36:12.843146    4081 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:36:12.861874    4081 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 16:36:12.911885    4081 start.go:159] libmachine.API.Create for "multinode-603000" (driver="qemu2")
	I1001 16:36:12.911936    4081 client.go:168] LocalClient.Create starting
	I1001 16:36:12.912056    4081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:36:12.912121    4081 main.go:141] libmachine: Decoding PEM data...
	I1001 16:36:12.912140    4081 main.go:141] libmachine: Parsing certificate...
	I1001 16:36:12.912204    4081 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:36:12.912248    4081 main.go:141] libmachine: Decoding PEM data...
	I1001 16:36:12.912264    4081 main.go:141] libmachine: Parsing certificate...
	I1001 16:36:12.912788    4081 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:36:13.081229    4081 main.go:141] libmachine: Creating SSH key...
	I1001 16:36:13.171430    4081 main.go:141] libmachine: Creating Disk image...
	I1001 16:36:13.171438    4081 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:36:13.171692    4081 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/disk.qcow2
	I1001 16:36:13.180874    4081 main.go:141] libmachine: STDOUT: 
	I1001 16:36:13.180895    4081 main.go:141] libmachine: STDERR: 
	I1001 16:36:13.180969    4081 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/disk.qcow2 +20000M
	I1001 16:36:13.188657    4081 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:36:13.188679    4081 main.go:141] libmachine: STDERR: 
	I1001 16:36:13.188697    4081 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/disk.qcow2
	I1001 16:36:13.188702    4081 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:36:13.188711    4081 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:36:13.188737    4081 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:a5:75:31:80:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/disk.qcow2
	I1001 16:36:13.190271    4081 main.go:141] libmachine: STDOUT: 
	I1001 16:36:13.190298    4081 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:36:13.190309    4081 client.go:171] duration metric: took 278.369167ms to LocalClient.Create
	I1001 16:36:15.192504    4081 start.go:128] duration metric: took 2.349319166s to createHost
	I1001 16:36:15.192576    4081 start.go:83] releasing machines lock for "multinode-603000", held for 2.349854667s
	W1001 16:36:15.193043    4081 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-603000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-603000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:36:15.208668    4081 out.go:201] 
	W1001 16:36:15.213863    4081 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:36:15.213887    4081 out.go:270] * 
	* 
	W1001 16:36:15.216334    4081 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:36:15.228808    4081 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-603000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000: exit status 7 (70.718458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.01s)
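Both VM creation attempts above stop at the same step: socket_vmnet_client cannot reach a socket_vmnet daemon at /var/run/socket_vmnet, so QEMU is never launched and the retry five seconds later fails identically. The following is a minimal Go sketch of the same reachability check, using the socket path taken from the log; it is a diagnostic aid for reproducing the "Connection refused" condition on the host, not part of the test suite.

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// The qemu2 driver starts the VM through socket_vmnet_client, which
		// needs a daemon listening on this unix socket. A dial error here is
		// the same "Connection refused" reported in the log above.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

If the dial fails, starting the socket_vmnet service on the host (for example via its Homebrew service, if that is how it was installed) is the usual prerequisite before re-running this group of tests.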

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (113.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-603000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-603000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (129.114375ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-603000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-603000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-603000 -- rollout status deployment/busybox: exit status 1 (56.960959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-603000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.861667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-603000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1001 16:36:15.556679    1659 retry.go:31] will retry after 982.036234ms: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.937083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-603000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1001 16:36:16.644018    1659 retry.go:31] will retry after 1.682889903s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.292542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-603000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1001 16:36:18.434557    1659 retry.go:31] will retry after 1.641959336s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.230834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-603000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1001 16:36:20.183101    1659 retry.go:31] will retry after 3.836993557s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.231375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-603000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1001 16:36:24.123614    1659 retry.go:31] will retry after 5.188906048s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.178041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-603000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1001 16:36:29.420121    1659 retry.go:31] will retry after 8.717433474s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (99.759875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-603000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1001 16:36:38.239629    1659 retry.go:31] will retry after 9.817149124s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.088292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-603000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1001 16:36:48.161246    1659 retry.go:31] will retry after 25.107076292s: failed to retrieve Pod IPs (may be temporary): exit status 1
E1001 16:37:02.583227    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.234375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-603000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1001 16:37:13.374774    1659 retry.go:31] will retry after 20.133538152s: failed to retrieve Pod IPs (may be temporary): exit status 1
E1001 16:37:14.606498    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.042375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-603000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1001 16:37:33.614537    1659 retry.go:31] will retry after 34.956398491s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.269041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-603000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.259792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-603000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-603000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-603000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.121542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-603000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-603000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-603000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.791208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-603000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-603000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-603000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.992916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-603000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000: exit status 7 (30.0955ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (113.62s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-603000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.324583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-603000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000: exit status 7 (30.350209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-603000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-603000 -v 3 --alsologtostderr: exit status 83 (43.6125ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-603000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-603000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:38:09.049591    4221 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:38:09.049763    4221 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:09.049766    4221 out.go:358] Setting ErrFile to fd 2...
	I1001 16:38:09.049769    4221 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:09.049905    4221 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:38:09.050142    4221 mustload.go:65] Loading cluster: multinode-603000
	I1001 16:38:09.050351    4221 config.go:182] Loaded profile config "multinode-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:38:09.054939    4221 out.go:177] * The control-plane node multinode-603000 host is not running: state=Stopped
	I1001 16:38:09.058789    4221 out.go:177]   To start a cluster, run: "minikube start -p multinode-603000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-603000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000: exit status 7 (30.47425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-603000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-603000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.559584ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-603000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-603000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-603000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000: exit status 7 (30.239542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-603000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-603000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-603000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVM
NUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-603000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesV
ersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\
":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000: exit status 7 (30.106084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-603000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-603000 status --output json --alsologtostderr: exit status 7 (30.033833ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-603000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:38:09.259603    4233 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:38:09.259751    4233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:09.259755    4233 out.go:358] Setting ErrFile to fd 2...
	I1001 16:38:09.259757    4233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:09.259877    4233 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:38:09.259994    4233 out.go:352] Setting JSON to true
	I1001 16:38:09.260004    4233 mustload.go:65] Loading cluster: multinode-603000
	I1001 16:38:09.260070    4233 notify.go:220] Checking for updates...
	I1001 16:38:09.260215    4233 config.go:182] Loaded profile config "multinode-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:38:09.260223    4233 status.go:174] checking status of multinode-603000 ...
	I1001 16:38:09.260444    4233 status.go:371] multinode-603000 host status = "Stopped" (err=<nil>)
	I1001 16:38:09.260448    4233 status.go:384] host is not running, skipping remaining checks
	I1001 16:38:09.260450    4233 status.go:176] multinode-603000 status: &{Name:multinode-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-603000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000: exit status 7 (29.552916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
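The decode error above ("cannot unmarshal object into Go value of type []cluster.Status") follows from a shape mismatch: with only a single stopped node in the profile, "minikube status --output json" printed one JSON object, while the test decodes an array of statuses. A minimal sketch reproducing the same class of error with encoding/json; the struct below is an illustrative stand-in, not the real cluster.Status type.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// NodeStatus is an illustrative stand-in for the fields the test reads;
	// it is not the actual cluster.Status definition from minikube.
	type NodeStatus struct {
		Name string
		Host string
	}

	func main() {
		// One node => one JSON object on stdout, as in the captured output above.
		out := []byte(`{"Name":"multinode-603000","Host":"Stopped"}`)

		var statuses []NodeStatus
		// Unmarshalling a lone object into a slice fails with
		// "json: cannot unmarshal object into Go value of type []main.NodeStatus",
		// the same failure mode the test reports.
		if err := json.Unmarshal(out, &statuses); err != nil {
			fmt.Println("decode error:", err)
		}
	}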

                                                
                                    
TestMultiNode/serial/StopNode (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-603000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-603000 node stop m03: exit status 85 (45.279ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-603000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-603000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-603000 status: exit status 7 (30.491ms)

                                                
                                                
-- stdout --
	multinode-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-603000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-603000 status --alsologtostderr: exit status 7 (29.446833ms)

                                                
                                                
-- stdout --
	multinode-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:38:09.395048    4241 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:38:09.395183    4241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:09.395186    4241 out.go:358] Setting ErrFile to fd 2...
	I1001 16:38:09.395189    4241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:09.395338    4241 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:38:09.395456    4241 out.go:352] Setting JSON to false
	I1001 16:38:09.395467    4241 mustload.go:65] Loading cluster: multinode-603000
	I1001 16:38:09.395536    4241 notify.go:220] Checking for updates...
	I1001 16:38:09.395679    4241 config.go:182] Loaded profile config "multinode-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:38:09.395687    4241 status.go:174] checking status of multinode-603000 ...
	I1001 16:38:09.395919    4241 status.go:371] multinode-603000 host status = "Stopped" (err=<nil>)
	I1001 16:38:09.395922    4241 status.go:384] host is not running, skipping remaining checks
	I1001 16:38:09.395924    4241 status.go:176] multinode-603000 status: &{Name:multinode-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-603000 status --alsologtostderr": multinode-603000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000: exit status 7 (29.292458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-603000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-603000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.351542ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:38:09.454885    4245 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:38:09.455124    4245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:09.455127    4245 out.go:358] Setting ErrFile to fd 2...
	I1001 16:38:09.455130    4245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:09.455269    4245 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:38:09.455515    4245 mustload.go:65] Loading cluster: multinode-603000
	I1001 16:38:09.455712    4245 config.go:182] Loaded profile config "multinode-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:38:09.459906    4245 out.go:201] 
	W1001 16:38:09.462782    4245 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1001 16:38:09.462788    4245 out.go:270] * 
	* 
	W1001 16:38:09.464461    4245 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:38:09.467829    4245 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I1001 16:38:09.454885    4245 out.go:345] Setting OutFile to fd 1 ...
I1001 16:38:09.455124    4245 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 16:38:09.455127    4245 out.go:358] Setting ErrFile to fd 2...
I1001 16:38:09.455130    4245 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 16:38:09.455269    4245 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
I1001 16:38:09.455515    4245 mustload.go:65] Loading cluster: multinode-603000
I1001 16:38:09.455712    4245 config.go:182] Loaded profile config "multinode-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 16:38:09.459906    4245 out.go:201] 
W1001 16:38:09.462782    4245 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1001 16:38:09.462788    4245 out.go:270] * 
* 
W1001 16:38:09.464461    4245 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1001 16:38:09.467829    4245 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-603000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-603000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-603000 status -v=7 --alsologtostderr: exit status 7 (30.185ms)

                                                
                                                
-- stdout --
	multinode-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:38:09.501337    4247 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:38:09.501506    4247 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:09.501509    4247 out.go:358] Setting ErrFile to fd 2...
	I1001 16:38:09.501512    4247 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:09.501634    4247 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:38:09.501747    4247 out.go:352] Setting JSON to false
	I1001 16:38:09.501760    4247 mustload.go:65] Loading cluster: multinode-603000
	I1001 16:38:09.501820    4247 notify.go:220] Checking for updates...
	I1001 16:38:09.501974    4247 config.go:182] Loaded profile config "multinode-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:38:09.501984    4247 status.go:174] checking status of multinode-603000 ...
	I1001 16:38:09.502230    4247 status.go:371] multinode-603000 host status = "Stopped" (err=<nil>)
	I1001 16:38:09.502234    4247 status.go:384] host is not running, skipping remaining checks
	I1001 16:38:09.502236    4247 status.go:176] multinode-603000 status: &{Name:multinode-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1001 16:38:09.503101    1659 retry.go:31] will retry after 1.295808108s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-603000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-603000 status -v=7 --alsologtostderr: exit status 7 (72.607ms)

                                                
                                                
-- stdout --
	multinode-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:38:10.871700    4251 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:38:10.871942    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:10.871947    4251 out.go:358] Setting ErrFile to fd 2...
	I1001 16:38:10.871951    4251 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:10.872145    4251 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:38:10.872304    4251 out.go:352] Setting JSON to false
	I1001 16:38:10.872319    4251 mustload.go:65] Loading cluster: multinode-603000
	I1001 16:38:10.872360    4251 notify.go:220] Checking for updates...
	I1001 16:38:10.872592    4251 config.go:182] Loaded profile config "multinode-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:38:10.872613    4251 status.go:174] checking status of multinode-603000 ...
	I1001 16:38:10.872925    4251 status.go:371] multinode-603000 host status = "Stopped" (err=<nil>)
	I1001 16:38:10.872930    4251 status.go:384] host is not running, skipping remaining checks
	I1001 16:38:10.872933    4251 status.go:176] multinode-603000 status: &{Name:multinode-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1001 16:38:10.873951    1659 retry.go:31] will retry after 996.117764ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-603000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-603000 status -v=7 --alsologtostderr: exit status 7 (72.964958ms)

                                                
                                                
-- stdout --
	multinode-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:38:11.942942    4253 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:38:11.943194    4253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:11.943199    4253 out.go:358] Setting ErrFile to fd 2...
	I1001 16:38:11.943202    4253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:11.943406    4253 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:38:11.943598    4253 out.go:352] Setting JSON to false
	I1001 16:38:11.943614    4253 mustload.go:65] Loading cluster: multinode-603000
	I1001 16:38:11.943654    4253 notify.go:220] Checking for updates...
	I1001 16:38:11.943943    4253 config.go:182] Loaded profile config "multinode-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:38:11.943961    4253 status.go:174] checking status of multinode-603000 ...
	I1001 16:38:11.944292    4253 status.go:371] multinode-603000 host status = "Stopped" (err=<nil>)
	I1001 16:38:11.944297    4253 status.go:384] host is not running, skipping remaining checks
	I1001 16:38:11.944299    4253 status.go:176] multinode-603000 status: &{Name:multinode-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1001 16:38:11.945389    1659 retry.go:31] will retry after 1.726076207s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-603000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-603000 status -v=7 --alsologtostderr: exit status 7 (72.801292ms)

                                                
                                                
-- stdout --
	multinode-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:38:13.744504    4255 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:38:13.744694    4255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:13.744698    4255 out.go:358] Setting ErrFile to fd 2...
	I1001 16:38:13.744701    4255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:13.744876    4255 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:38:13.745032    4255 out.go:352] Setting JSON to false
	I1001 16:38:13.745046    4255 mustload.go:65] Loading cluster: multinode-603000
	I1001 16:38:13.745095    4255 notify.go:220] Checking for updates...
	I1001 16:38:13.745316    4255 config.go:182] Loaded profile config "multinode-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:38:13.745331    4255 status.go:174] checking status of multinode-603000 ...
	I1001 16:38:13.745652    4255 status.go:371] multinode-603000 host status = "Stopped" (err=<nil>)
	I1001 16:38:13.745657    4255 status.go:384] host is not running, skipping remaining checks
	I1001 16:38:13.745660    4255 status.go:176] multinode-603000 status: &{Name:multinode-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1001 16:38:13.746754    1659 retry.go:31] will retry after 4.612448025s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-603000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-603000 status -v=7 --alsologtostderr: exit status 7 (71.052125ms)

                                                
                                                
-- stdout --
	multinode-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:38:18.430447    4257 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:38:18.430617    4257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:18.430621    4257 out.go:358] Setting ErrFile to fd 2...
	I1001 16:38:18.430624    4257 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:18.430819    4257 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:38:18.430972    4257 out.go:352] Setting JSON to false
	I1001 16:38:18.430985    4257 mustload.go:65] Loading cluster: multinode-603000
	I1001 16:38:18.431021    4257 notify.go:220] Checking for updates...
	I1001 16:38:18.431263    4257 config.go:182] Loaded profile config "multinode-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:38:18.431275    4257 status.go:174] checking status of multinode-603000 ...
	I1001 16:38:18.431589    4257 status.go:371] multinode-603000 host status = "Stopped" (err=<nil>)
	I1001 16:38:18.431594    4257 status.go:384] host is not running, skipping remaining checks
	I1001 16:38:18.431596    4257 status.go:176] multinode-603000 status: &{Name:multinode-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1001 16:38:18.432633    1659 retry.go:31] will retry after 3.496293926s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-603000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-603000 status -v=7 --alsologtostderr: exit status 7 (72.557041ms)

                                                
                                                
-- stdout --
	multinode-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:38:22.001841    4261 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:38:22.001998    4261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:22.002002    4261 out.go:358] Setting ErrFile to fd 2...
	I1001 16:38:22.002005    4261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:22.002197    4261 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:38:22.002355    4261 out.go:352] Setting JSON to false
	I1001 16:38:22.002369    4261 mustload.go:65] Loading cluster: multinode-603000
	I1001 16:38:22.002420    4261 notify.go:220] Checking for updates...
	I1001 16:38:22.002648    4261 config.go:182] Loaded profile config "multinode-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:38:22.002659    4261 status.go:174] checking status of multinode-603000 ...
	I1001 16:38:22.002970    4261 status.go:371] multinode-603000 host status = "Stopped" (err=<nil>)
	I1001 16:38:22.002975    4261 status.go:384] host is not running, skipping remaining checks
	I1001 16:38:22.002978    4261 status.go:176] multinode-603000 status: &{Name:multinode-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1001 16:38:22.003971    1659 retry.go:31] will retry after 10.671528113s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-603000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-603000 status -v=7 --alsologtostderr: exit status 7 (72.942166ms)

                                                
                                                
-- stdout --
	multinode-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:38:32.748321    4265 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:38:32.748575    4265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:32.748580    4265 out.go:358] Setting ErrFile to fd 2...
	I1001 16:38:32.748584    4265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:32.748808    4265 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:38:32.748996    4265 out.go:352] Setting JSON to false
	I1001 16:38:32.749012    4265 mustload.go:65] Loading cluster: multinode-603000
	I1001 16:38:32.749063    4265 notify.go:220] Checking for updates...
	I1001 16:38:32.749332    4265 config.go:182] Loaded profile config "multinode-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:38:32.749346    4265 status.go:174] checking status of multinode-603000 ...
	I1001 16:38:32.749668    4265 status.go:371] multinode-603000 host status = "Stopped" (err=<nil>)
	I1001 16:38:32.749673    4265 status.go:384] host is not running, skipping remaining checks
	I1001 16:38:32.749676    4265 status.go:176] multinode-603000 status: &{Name:multinode-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1001 16:38:32.750809    1659 retry.go:31] will retry after 15.613962276s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-603000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-603000 status -v=7 --alsologtostderr: exit status 7 (75.11825ms)

                                                
                                                
-- stdout --
	multinode-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:38:48.440178    4274 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:38:48.440362    4274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:48.440366    4274 out.go:358] Setting ErrFile to fd 2...
	I1001 16:38:48.440369    4274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:48.440546    4274 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:38:48.440717    4274 out.go:352] Setting JSON to false
	I1001 16:38:48.440731    4274 mustload.go:65] Loading cluster: multinode-603000
	I1001 16:38:48.440771    4274 notify.go:220] Checking for updates...
	I1001 16:38:48.440998    4274 config.go:182] Loaded profile config "multinode-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:38:48.441011    4274 status.go:174] checking status of multinode-603000 ...
	I1001 16:38:48.441345    4274 status.go:371] multinode-603000 host status = "Stopped" (err=<nil>)
	I1001 16:38:48.441350    4274 status.go:384] host is not running, skipping remaining checks
	I1001 16:38:48.441353    4274 status.go:176] multinode-603000 status: &{Name:multinode-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-603000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000: exit status 7 (33.116375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (39.05s)
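The retry.go lines above show the harness polling `minikube status` with a growing back-off until it gives up after roughly 39 seconds. A minimal shell sketch of that pattern (the delays are copied loosely from the retry.go log lines; the loop is illustrative only, not the harness code):

	PROFILE=multinode-603000
	for DELAY in 1 1 2 5 4 11 16; do
	    # same command the test runs at multinode_test.go:290
	    if out/minikube-darwin-arm64 -p "$PROFILE" status -v=7 --alsologtostderr; then
	        break                              # status finally reported a running cluster
	    fi
	    echo "will retry after ${DELAY}s"      # mirrors the retry.go messages above
	    sleep "$DELAY"
	done

Because the host never leaves the "Stopped" state, every iteration exits 7 and the test fails once the retries are exhausted.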

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-603000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-603000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-603000: (3.410763959s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-603000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-603000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.222245667s)

                                                
                                                
-- stdout --
	* [multinode-603000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-603000" primary control-plane node in "multinode-603000" cluster
	* Restarting existing qemu2 VM for "multinode-603000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-603000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:38:51.977859    4298 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:38:51.978010    4298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:51.978015    4298 out.go:358] Setting ErrFile to fd 2...
	I1001 16:38:51.978018    4298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:51.978205    4298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:38:51.979461    4298 out.go:352] Setting JSON to false
	I1001 16:38:51.998645    4298 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4099,"bootTime":1727821832,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:38:51.998724    4298 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:38:52.003464    4298 out.go:177] * [multinode-603000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:38:52.010544    4298 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:38:52.010550    4298 notify.go:220] Checking for updates...
	I1001 16:38:52.016380    4298 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:38:52.019478    4298 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:38:52.022352    4298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:38:52.025436    4298 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:38:52.028407    4298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:38:52.031714    4298 config.go:182] Loaded profile config "multinode-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:38:52.031775    4298 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:38:52.036384    4298 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 16:38:52.043376    4298 start.go:297] selected driver: qemu2
	I1001 16:38:52.043384    4298 start.go:901] validating driver "qemu2" against &{Name:multinode-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:multinode-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:38:52.043473    4298 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:38:52.045895    4298 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:38:52.045921    4298 cni.go:84] Creating CNI manager for ""
	I1001 16:38:52.045949    4298 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1001 16:38:52.045993    4298 start.go:340] cluster config:
	{Name:multinode-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-603000 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:38:52.049863    4298 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:38:52.057213    4298 out.go:177] * Starting "multinode-603000" primary control-plane node in "multinode-603000" cluster
	I1001 16:38:52.061378    4298 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:38:52.061393    4298 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:38:52.061406    4298 cache.go:56] Caching tarball of preloaded images
	I1001 16:38:52.061486    4298 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:38:52.061492    4298 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:38:52.061582    4298 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/multinode-603000/config.json ...
	I1001 16:38:52.062059    4298 start.go:360] acquireMachinesLock for multinode-603000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:38:52.062113    4298 start.go:364] duration metric: took 47µs to acquireMachinesLock for "multinode-603000"
	I1001 16:38:52.062123    4298 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:38:52.062128    4298 fix.go:54] fixHost starting: 
	I1001 16:38:52.062256    4298 fix.go:112] recreateIfNeeded on multinode-603000: state=Stopped err=<nil>
	W1001 16:38:52.062266    4298 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:38:52.070341    4298 out.go:177] * Restarting existing qemu2 VM for "multinode-603000" ...
	I1001 16:38:52.074434    4298 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:38:52.074490    4298 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:a5:75:31:80:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/disk.qcow2
	I1001 16:38:52.077053    4298 main.go:141] libmachine: STDOUT: 
	I1001 16:38:52.077084    4298 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:38:52.077121    4298 fix.go:56] duration metric: took 14.9905ms for fixHost
	I1001 16:38:52.077127    4298 start.go:83] releasing machines lock for "multinode-603000", held for 15.007834ms
	W1001 16:38:52.077136    4298 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:38:52.077179    4298 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:38:52.077184    4298 start.go:729] Will try again in 5 seconds ...
	I1001 16:38:57.079301    4298 start.go:360] acquireMachinesLock for multinode-603000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:38:57.079701    4298 start.go:364] duration metric: took 328.209µs to acquireMachinesLock for "multinode-603000"
	I1001 16:38:57.079818    4298 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:38:57.079837    4298 fix.go:54] fixHost starting: 
	I1001 16:38:57.080474    4298 fix.go:112] recreateIfNeeded on multinode-603000: state=Stopped err=<nil>
	W1001 16:38:57.080501    4298 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:38:57.086029    4298 out.go:177] * Restarting existing qemu2 VM for "multinode-603000" ...
	I1001 16:38:57.093953    4298 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:38:57.094149    4298 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:a5:75:31:80:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/disk.qcow2
	I1001 16:38:57.103202    4298 main.go:141] libmachine: STDOUT: 
	I1001 16:38:57.103283    4298 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:38:57.103364    4298 fix.go:56] duration metric: took 23.529167ms for fixHost
	I1001 16:38:57.103386    4298 start.go:83] releasing machines lock for "multinode-603000", held for 23.657ms
	W1001 16:38:57.103612    4298 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-603000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-603000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:38:57.110931    4298 out.go:201] 
	W1001 16:38:57.114993    4298 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:38:57.115019    4298 out.go:270] * 
	* 
	W1001 16:38:57.117940    4298 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:38:57.125744    4298 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-603000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-603000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000: exit status 7 (32.800792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.77s)
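Both restart attempts above fail with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon that the qemu2 driver uses for VM networking is not listening. A minimal check-and-restart sketch, assuming the Homebrew socket_vmnet installation implied by the `/opt/socket_vmnet/bin/socket_vmnet_client` path in the log (the service name is an assumption based on minikube's qemu2 networking setup, not something this report confirms):

	# the qemu2 driver hands networking to socket_vmnet; if its unix socket is gone,
	# every "Restarting existing qemu2 VM" attempt fails exactly as in the log above
	if [ ! -S /var/run/socket_vmnet ]; then
	    echo "socket_vmnet is not listening at /var/run/socket_vmnet"
	fi
	# restart the daemon (assumes the Homebrew service install; adjust to your setup)
	sudo brew services restart socket_vmnet

With the socket back, `minikube start -p multinode-603000` would be expected to get past the GUEST_PROVISION error; all later tests in this serial group fail for the same underlying reason.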

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-603000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-603000 node delete m03: exit status 83 (40.694875ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-603000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-603000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-603000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-603000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-603000 status --alsologtostderr: exit status 7 (30.1705ms)

                                                
                                                
-- stdout --
	multinode-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:38:57.310669    4312 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:38:57.310818    4312 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:57.310822    4312 out.go:358] Setting ErrFile to fd 2...
	I1001 16:38:57.310824    4312 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:38:57.310969    4312 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:38:57.311095    4312 out.go:352] Setting JSON to false
	I1001 16:38:57.311107    4312 mustload.go:65] Loading cluster: multinode-603000
	I1001 16:38:57.311160    4312 notify.go:220] Checking for updates...
	I1001 16:38:57.311324    4312 config.go:182] Loaded profile config "multinode-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:38:57.311333    4312 status.go:174] checking status of multinode-603000 ...
	I1001 16:38:57.311579    4312 status.go:371] multinode-603000 host status = "Stopped" (err=<nil>)
	I1001 16:38:57.311582    4312 status.go:384] host is not running, skipping remaining checks
	I1001 16:38:57.311584    4312 status.go:176] multinode-603000 status: &{Name:multinode-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-603000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000: exit status 7 (29.847334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-603000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-603000 stop: (3.045799625s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-603000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-603000 status: exit status 7 (64.767458ms)

                                                
                                                
-- stdout --
	multinode-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-603000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-603000 status --alsologtostderr: exit status 7 (31.895625ms)

                                                
                                                
-- stdout --
	multinode-603000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:39:00.483577    4338 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:39:00.483740    4338 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:39:00.483743    4338 out.go:358] Setting ErrFile to fd 2...
	I1001 16:39:00.483746    4338 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:39:00.483847    4338 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:39:00.483961    4338 out.go:352] Setting JSON to false
	I1001 16:39:00.483972    4338 mustload.go:65] Loading cluster: multinode-603000
	I1001 16:39:00.484030    4338 notify.go:220] Checking for updates...
	I1001 16:39:00.484195    4338 config.go:182] Loaded profile config "multinode-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:39:00.484204    4338 status.go:174] checking status of multinode-603000 ...
	I1001 16:39:00.484436    4338 status.go:371] multinode-603000 host status = "Stopped" (err=<nil>)
	I1001 16:39:00.484440    4338 status.go:384] host is not running, skipping remaining checks
	I1001 16:39:00.484441    4338 status.go:176] multinode-603000 status: &{Name:multinode-603000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-603000 status --alsologtostderr": multinode-603000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-603000 status --alsologtostderr": multinode-603000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000: exit status 7 (30.085333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.17s)
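The stop itself succeeds here; the failure is in the follow-up count checks, because the status output contains a single `host: Stopped` stanza while the suite expects one per node. A rough sketch of the kind of check being made (illustrative only; the real assertions live in multinode_test.go, and the two-node expectation is an assumption about this serial suite):

	PROFILE=multinode-603000
	EXPECTED_NODES=2    # assumption: the serial suite intended a two-node cluster
	STOPPED=$(out/minikube-darwin-arm64 -p "$PROFILE" status 2>/dev/null | grep -c '^host: Stopped')
	if [ "$STOPPED" -ne "$EXPECTED_NODES" ]; then
	    echo "incorrect number of stopped hosts: got $STOPPED, want $EXPECTED_NODES"
	fi

Since the second node was never created earlier in the run, the count can only ever be one.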

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-603000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-603000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.181366542s)

                                                
                                                
-- stdout --
	* [multinode-603000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-603000" primary control-plane node in "multinode-603000" cluster
	* Restarting existing qemu2 VM for "multinode-603000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-603000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:39:00.543450    4342 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:39:00.543564    4342 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:39:00.543567    4342 out.go:358] Setting ErrFile to fd 2...
	I1001 16:39:00.543570    4342 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:39:00.543682    4342 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:39:00.544625    4342 out.go:352] Setting JSON to false
	I1001 16:39:00.560531    4342 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4108,"bootTime":1727821832,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:39:00.560604    4342 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:39:00.565245    4342 out.go:177] * [multinode-603000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:39:00.575234    4342 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:39:00.575286    4342 notify.go:220] Checking for updates...
	I1001 16:39:00.582081    4342 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:39:00.585169    4342 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:39:00.588195    4342 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:39:00.589463    4342 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:39:00.592157    4342 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:39:00.595510    4342 config.go:182] Loaded profile config "multinode-603000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:39:00.595796    4342 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:39:00.600038    4342 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 16:39:00.607143    4342 start.go:297] selected driver: qemu2
	I1001 16:39:00.607152    4342 start.go:901] validating driver "qemu2" against &{Name:multinode-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:multinode-603000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:39:00.607209    4342 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:39:00.609403    4342 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:39:00.609428    4342 cni.go:84] Creating CNI manager for ""
	I1001 16:39:00.609457    4342 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1001 16:39:00.609501    4342 start.go:340] cluster config:
	{Name:multinode-603000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-603000 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:39:00.613045    4342 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:39:00.620168    4342 out.go:177] * Starting "multinode-603000" primary control-plane node in "multinode-603000" cluster
	I1001 16:39:00.624207    4342 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:39:00.624222    4342 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:39:00.624230    4342 cache.go:56] Caching tarball of preloaded images
	I1001 16:39:00.624303    4342 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:39:00.624310    4342 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:39:00.624378    4342 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/multinode-603000/config.json ...
	I1001 16:39:00.624819    4342 start.go:360] acquireMachinesLock for multinode-603000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:39:00.624855    4342 start.go:364] duration metric: took 29.584µs to acquireMachinesLock for "multinode-603000"
	I1001 16:39:00.624862    4342 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:39:00.624866    4342 fix.go:54] fixHost starting: 
	I1001 16:39:00.624990    4342 fix.go:112] recreateIfNeeded on multinode-603000: state=Stopped err=<nil>
	W1001 16:39:00.624998    4342 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:39:00.629158    4342 out.go:177] * Restarting existing qemu2 VM for "multinode-603000" ...
	I1001 16:39:00.637127    4342 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:39:00.637161    4342 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:a5:75:31:80:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/disk.qcow2
	I1001 16:39:00.639146    4342 main.go:141] libmachine: STDOUT: 
	I1001 16:39:00.639184    4342 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:39:00.639215    4342 fix.go:56] duration metric: took 14.346875ms for fixHost
	I1001 16:39:00.639220    4342 start.go:83] releasing machines lock for "multinode-603000", held for 14.361583ms
	W1001 16:39:00.639226    4342 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:39:00.639283    4342 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:39:00.639288    4342 start.go:729] Will try again in 5 seconds ...
	I1001 16:39:05.641432    4342 start.go:360] acquireMachinesLock for multinode-603000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:39:05.641989    4342 start.go:364] duration metric: took 462.833µs to acquireMachinesLock for "multinode-603000"
	I1001 16:39:05.642159    4342 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:39:05.642181    4342 fix.go:54] fixHost starting: 
	I1001 16:39:05.642877    4342 fix.go:112] recreateIfNeeded on multinode-603000: state=Stopped err=<nil>
	W1001 16:39:05.642905    4342 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:39:05.647387    4342 out.go:177] * Restarting existing qemu2 VM for "multinode-603000" ...
	I1001 16:39:05.654259    4342 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:39:05.654432    4342 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:a5:75:31:80:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/multinode-603000/disk.qcow2
	I1001 16:39:05.663830    4342 main.go:141] libmachine: STDOUT: 
	I1001 16:39:05.663891    4342 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:39:05.664006    4342 fix.go:56] duration metric: took 21.796083ms for fixHost
	I1001 16:39:05.664025    4342 start.go:83] releasing machines lock for "multinode-603000", held for 22.01325ms
	W1001 16:39:05.664185    4342 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-603000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-603000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:39:05.671395    4342 out.go:201] 
	W1001 16:39:05.675433    4342 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:39:05.675457    4342 out.go:270] * 
	* 
	W1001 16:39:05.678379    4342 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:39:05.684323    4342 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-603000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000: exit status 7 (68.8415ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)

TestMultiNode/serial/ValidateNameConflict (20.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-603000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-603000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-603000-m01 --driver=qemu2 : exit status 80 (9.961093292s)

                                                
                                                
-- stdout --
	* [multinode-603000-m01] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-603000-m01" primary control-plane node in "multinode-603000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-603000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-603000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-603000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-603000-m02 --driver=qemu2 : exit status 80 (10.174318666s)

                                                
                                                
-- stdout --
	* [multinode-603000-m02] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-603000-m02" primary control-plane node in "multinode-603000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-603000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-603000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-603000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-603000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-603000: exit status 83 (83.283834ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-603000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-603000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-603000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-603000 -n multinode-603000: exit status 7 (29.400209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-603000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.37s)

TestPreload (9.98s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-950000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-950000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.8283545s)

                                                
                                                
-- stdout --
	* [test-preload-950000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-950000" primary control-plane node in "test-preload-950000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-950000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:39:26.273813    4401 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:39:26.273936    4401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:39:26.273939    4401 out.go:358] Setting ErrFile to fd 2...
	I1001 16:39:26.273941    4401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:39:26.274079    4401 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:39:26.275135    4401 out.go:352] Setting JSON to false
	I1001 16:39:26.291088    4401 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4134,"bootTime":1727821832,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:39:26.291162    4401 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:39:26.298206    4401 out.go:177] * [test-preload-950000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:39:26.306041    4401 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:39:26.306083    4401 notify.go:220] Checking for updates...
	I1001 16:39:26.313091    4401 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:39:26.316053    4401 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:39:26.319126    4401 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:39:26.322070    4401 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:39:26.324982    4401 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:39:26.328448    4401 config.go:182] Loaded profile config "multinode-603000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:39:26.328517    4401 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:39:26.333030    4401 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:39:26.340029    4401 start.go:297] selected driver: qemu2
	I1001 16:39:26.340037    4401 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:39:26.340051    4401 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:39:26.342244    4401 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 16:39:26.345066    4401 out.go:177] * Automatically selected the socket_vmnet network
	I1001 16:39:26.348144    4401 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:39:26.348166    4401 cni.go:84] Creating CNI manager for ""
	I1001 16:39:26.348200    4401 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:39:26.348211    4401 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 16:39:26.348233    4401 start.go:340] cluster config:
	{Name:test-preload-950000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-950000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/s
ocket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:39:26.351834    4401 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:39:26.359084    4401 out.go:177] * Starting "test-preload-950000" primary control-plane node in "test-preload-950000" cluster
	I1001 16:39:26.363063    4401 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1001 16:39:26.363159    4401 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/test-preload-950000/config.json ...
	I1001 16:39:26.363186    4401 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/test-preload-950000/config.json: {Name:mk9bd659e64f6c1d6b37e14d2c3ffa92559e2ba9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:39:26.363190    4401 cache.go:107] acquiring lock: {Name:mk04d0efd994fa5cbd61ff37798e20026905d950 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:39:26.363195    4401 cache.go:107] acquiring lock: {Name:mk00b067b11722498df239e3069c4d6f00311100 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:39:26.363221    4401 cache.go:107] acquiring lock: {Name:mk339fde154d39bea4f687332d27aef6383ae5a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:39:26.363375    4401 cache.go:107] acquiring lock: {Name:mkf37578556691de4711e6fbf12b17973616f297 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:39:26.363390    4401 cache.go:107] acquiring lock: {Name:mk0d2d7d9fad359c1530731185194759b92c8150 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:39:26.363390    4401 cache.go:107] acquiring lock: {Name:mk3fdae6c7a423c1f065007d7e1b9f995e2797dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:39:26.363426    4401 cache.go:107] acquiring lock: {Name:mkb947967a15d3095491453075a76956d9f408cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:39:26.363422    4401 cache.go:107] acquiring lock: {Name:mk3bc39c16368702b9ffdfe4e1cf0bf941e56385 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:39:26.363648    4401 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1001 16:39:26.363676    4401 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1001 16:39:26.363684    4401 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1001 16:39:26.363749    4401 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1001 16:39:26.363813    4401 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1001 16:39:26.363824    4401 start.go:360] acquireMachinesLock for test-preload-950000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:39:26.363835    4401 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:39:26.363864    4401 start.go:364] duration metric: took 33.875µs to acquireMachinesLock for "test-preload-950000"
	I1001 16:39:26.363651    4401 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1001 16:39:26.363903    4401 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 16:39:26.363877    4401 start.go:93] Provisioning new machine with config: &{Name:test-preload-950000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.24.4 ClusterName:test-preload-950000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:39:26.363910    4401 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:39:26.372010    4401 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 16:39:26.377707    4401 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1001 16:39:26.377829    4401 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 16:39:26.378045    4401 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1001 16:39:26.378347    4401 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1001 16:39:26.379050    4401 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:39:26.379822    4401 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1001 16:39:26.379849    4401 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1001 16:39:26.379867    4401 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1001 16:39:26.392131    4401 start.go:159] libmachine.API.Create for "test-preload-950000" (driver="qemu2")
	I1001 16:39:26.392160    4401 client.go:168] LocalClient.Create starting
	I1001 16:39:26.392229    4401 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:39:26.392264    4401 main.go:141] libmachine: Decoding PEM data...
	I1001 16:39:26.392274    4401 main.go:141] libmachine: Parsing certificate...
	I1001 16:39:26.392328    4401 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:39:26.392353    4401 main.go:141] libmachine: Decoding PEM data...
	I1001 16:39:26.392361    4401 main.go:141] libmachine: Parsing certificate...
	I1001 16:39:26.392708    4401 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:39:26.551202    4401 main.go:141] libmachine: Creating SSH key...
	I1001 16:39:26.617628    4401 main.go:141] libmachine: Creating Disk image...
	I1001 16:39:26.617662    4401 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:39:26.617968    4401 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/test-preload-950000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/test-preload-950000/disk.qcow2
	I1001 16:39:26.627406    4401 main.go:141] libmachine: STDOUT: 
	I1001 16:39:26.627422    4401 main.go:141] libmachine: STDERR: 
	I1001 16:39:26.627488    4401 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/test-preload-950000/disk.qcow2 +20000M
	I1001 16:39:26.636731    4401 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:39:26.636776    4401 main.go:141] libmachine: STDERR: 
	I1001 16:39:26.636794    4401 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/test-preload-950000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/test-preload-950000/disk.qcow2
	I1001 16:39:26.636802    4401 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:39:26.636825    4401 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:39:26.636882    4401 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/test-preload-950000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/test-preload-950000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/test-preload-950000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:2b:67:fd:45:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/test-preload-950000/disk.qcow2
	I1001 16:39:26.638593    4401 main.go:141] libmachine: STDOUT: 
	I1001 16:39:26.638611    4401 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:39:26.638628    4401 client.go:171] duration metric: took 246.466209ms to LocalClient.Create
	I1001 16:39:28.495707    4401 cache.go:162] opening:  /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1001 16:39:28.613424    4401 cache.go:162] opening:  /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1001 16:39:28.621493    4401 cache.go:162] opening:  /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1001 16:39:28.639547    4401 start.go:128] duration metric: took 2.275645041s to createHost
	I1001 16:39:28.639587    4401 start.go:83] releasing machines lock for "test-preload-950000", held for 2.275736542s
	W1001 16:39:28.639644    4401 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:39:28.645201    4401 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1001 16:39:28.645308    4401 cache.go:162] opening:  /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1001 16:39:28.658538    4401 out.go:177] * Deleting "test-preload-950000" in qemu2 ...
	W1001 16:39:28.690169    4401 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:39:28.690203    4401 start.go:729] Will try again in 5 seconds ...
	I1001 16:39:28.775095    4401 cache.go:157] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1001 16:39:28.775146    4401 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.411840834s
	I1001 16:39:28.775185    4401 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W1001 16:39:29.029722    4401 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1001 16:39:29.029840    4401 cache.go:162] opening:  /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1001 16:39:29.144105    4401 cache.go:162] opening:  /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1001 16:39:29.191294    4401 cache.go:162] opening:  /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1001 16:39:29.201270    4401 cache.go:162] opening:  /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1001 16:39:29.964736    4401 cache.go:157] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1001 16:39:29.964790    4401 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.601439584s
	I1001 16:39:29.964816    4401 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1001 16:39:31.039322    4401 cache.go:157] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1001 16:39:31.039382    4401 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.67607975s
	I1001 16:39:31.039408    4401 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1001 16:39:31.094028    4401 cache.go:157] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1001 16:39:31.094081    4401 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.730939542s
	I1001 16:39:31.094111    4401 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1001 16:39:32.849134    4401 cache.go:157] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1001 16:39:32.849188    4401 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.485820625s
	I1001 16:39:32.849211    4401 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1001 16:39:33.301109    4401 cache.go:157] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1001 16:39:33.301176    4401 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 6.938043125s
	I1001 16:39:33.301205    4401 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1001 16:39:33.443191    4401 cache.go:157] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1001 16:39:33.443232    4401 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 7.080120792s
	I1001 16:39:33.443259    4401 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1001 16:39:33.690350    4401 start.go:360] acquireMachinesLock for test-preload-950000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:39:33.690782    4401 start.go:364] duration metric: took 368.458µs to acquireMachinesLock for "test-preload-950000"
	I1001 16:39:33.690879    4401 start.go:93] Provisioning new machine with config: &{Name:test-preload-950000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.24.4 ClusterName:test-preload-950000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:39:33.691118    4401 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:39:33.711601    4401 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 16:39:33.761817    4401 start.go:159] libmachine.API.Create for "test-preload-950000" (driver="qemu2")
	I1001 16:39:33.761863    4401 client.go:168] LocalClient.Create starting
	I1001 16:39:33.761975    4401 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:39:33.762044    4401 main.go:141] libmachine: Decoding PEM data...
	I1001 16:39:33.762065    4401 main.go:141] libmachine: Parsing certificate...
	I1001 16:39:33.762132    4401 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:39:33.762175    4401 main.go:141] libmachine: Decoding PEM data...
	I1001 16:39:33.762193    4401 main.go:141] libmachine: Parsing certificate...
	I1001 16:39:33.762705    4401 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:39:33.931779    4401 main.go:141] libmachine: Creating SSH key...
	I1001 16:39:34.013699    4401 main.go:141] libmachine: Creating Disk image...
	I1001 16:39:34.013705    4401 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:39:34.013933    4401 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/test-preload-950000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/test-preload-950000/disk.qcow2
	I1001 16:39:34.023363    4401 main.go:141] libmachine: STDOUT: 
	I1001 16:39:34.023383    4401 main.go:141] libmachine: STDERR: 
	I1001 16:39:34.023445    4401 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/test-preload-950000/disk.qcow2 +20000M
	I1001 16:39:34.031540    4401 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:39:34.031557    4401 main.go:141] libmachine: STDERR: 
	I1001 16:39:34.031570    4401 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/test-preload-950000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/test-preload-950000/disk.qcow2
	I1001 16:39:34.031574    4401 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:39:34.031583    4401 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:39:34.031628    4401 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/test-preload-950000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/test-preload-950000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/test-preload-950000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:e4:8d:df:70:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/test-preload-950000/disk.qcow2
	I1001 16:39:34.033295    4401 main.go:141] libmachine: STDOUT: 
	I1001 16:39:34.033309    4401 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:39:34.033321    4401 client.go:171] duration metric: took 271.456792ms to LocalClient.Create
	I1001 16:39:36.033741    4401 start.go:128] duration metric: took 2.342594541s to createHost
	I1001 16:39:36.033821    4401 start.go:83] releasing machines lock for "test-preload-950000", held for 2.343037542s
	W1001 16:39:36.034190    4401 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-950000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-950000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:39:36.043816    4401 out.go:201] 
	W1001 16:39:36.047837    4401 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:39:36.047924    4401 out.go:270] * 
	* 
	W1001 16:39:36.051021    4401 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:39:36.058652    4401 out.go:201] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-950000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-10-01 16:39:36.076933 -0700 PDT m=+3168.966111501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-950000 -n test-preload-950000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-950000 -n test-preload-950000: exit status 7 (65.10175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-950000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-950000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-950000
--- FAIL: TestPreload (9.98s)

TestScheduledStopUnix (10.08s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-936000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-936000 --memory=2048 --driver=qemu2 : exit status 80 (9.928357041s)

                                                
                                                
-- stdout --
	* [scheduled-stop-936000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-936000" primary control-plane node in "scheduled-stop-936000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-936000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-936000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-936000" primary control-plane node in "scheduled-stop-936000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-936000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-10-01 16:39:46.152985 -0700 PDT m=+3179.042267042
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-936000 -n scheduled-stop-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-936000 -n scheduled-stop-936000: exit status 7 (67.440375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-936000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-936000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-936000
--- FAIL: TestScheduledStopUnix (10.08s)
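Every qemu2 start in this group failed the same way on the host side, before any guest work began: the driver could not dial the socket_vmnet daemon at /var/run/socket_vmnet. A minimal diagnostic sketch for the CI host follows; it assumes socket_vmnet was installed through Homebrew and that this minikube build exposes the socket_vmnet path flags (the SocketVMnetPath/SocketVMnetClientPath fields do appear in the profile config dumped later in this report). The exact paths are illustrative, not taken from this run.

	# Does the socket exist, and is the daemon up?
	ls -l /var/run/socket_vmnet
	sudo brew services list | grep socket_vmnet
	sudo brew services start socket_vmnet      # (re)start it if it is not running

	# If the daemon listens somewhere else (typical Homebrew-on-arm64 layout),
	# point minikube at it explicitly instead of relying on the default path:
	out/minikube-darwin-arm64 start -p scheduled-stop-936000 --driver=qemu2 \
	  --socket-vmnet-path=/opt/homebrew/var/run/socket_vmnet \
	  --socket-vmnet-client-path=/opt/homebrew/opt/socket_vmnet/bin/socket_vmnet_client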

                                                
                                    
TestSkaffold (16.44s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe4082269065 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe4082269065 version: (1.06575975s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-534000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-534000 --memory=2600 --driver=qemu2 : exit status 80 (10.036161709s)

                                                
                                                
-- stdout --
	* [skaffold-534000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-534000" primary control-plane node in "skaffold-534000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-534000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-534000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-534000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-534000" primary control-plane node in "skaffold-534000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-534000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-534000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-10-01 16:40:02.602115 -0700 PDT m=+3195.491565917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-534000 -n skaffold-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-534000 -n skaffold-534000: exit status 7 (62.409042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-534000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-534000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-534000
--- FAIL: TestSkaffold (16.44s)
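The Skaffold run never reached skaffold itself; it is the same socket_vmnet refusal as above. When the host cannot provide socket_vmnet at all, one hedged workaround (not attempted in this run, and only if this minikube release supports it) is to let the qemu2 driver fall back to QEMU's user-mode networking, which trades away host-to-guest reachability for NodePort and tunnel style access:

	out/minikube-darwin-arm64 start -p skaffold-534000 --memory=2600 --driver=qemu2 --network=builtin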

                                                
                                    
TestRunningBinaryUpgrade (708.87s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2014786995 start -p running-upgrade-193000 --memory=2200 --vm-driver=qemu2 
E1001 16:42:02.580204    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:42:14.603341    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2014786995 start -p running-upgrade-193000 --memory=2200 --vm-driver=qemu2 : exit status 90 (2m4.024615625s)

                                                
                                                
-- stdout --
	* [running-upgrade-193000] minikube v1.26.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/legacy_kubeconfig772260030
	* Using the qemu2 (experimental) driver based on user configuration
	* Downloading VM boot image ...
	* minikube 1.34.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.34.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	* Starting control plane node running-upgrade-193000 in cluster running-upgrade-193000
	* Downloading Kubernetes v1.24.1 preload ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-10-01T23:42:57Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/cri-dockerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
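This first RUNTIME_ENABLE exit comes from the legacy v1.26.0 binary giving up while the guest's cri-dockerd socket was still coming up; the retry about 1.2s later (below) succeeds. A sketch of the guest-side checks that would confirm that diagnosis, assuming SSH access to the node and the cri-dockerd endpoint this run configures further down:

	# e.g. via: out/minikube-darwin-arm64 ssh -p running-upgrade-193000
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
	sudo systemctl status cri-docker.socket cri-docker.service
	sudo journalctl -u cri-docker.service --no-pager | tail -n 20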
I1001 16:42:57.520758    1659 retry.go:31] will retry after 1.245615134s: exit status 90
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2014786995 start -p running-upgrade-193000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2014786995 start -p running-upgrade-193000 --memory=2200 --vm-driver=qemu2 : (34.07142675s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-193000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-193000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m32.102098208s)

                                                
                                                
-- stdout --
	* [running-upgrade-193000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-193000" primary control-plane node in "running-upgrade-193000" cluster
	* Updating the running qemu2 "running-upgrade-193000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:43:32.869727    4804 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:43:32.869861    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:43:32.869864    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:43:32.869866    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:43:32.870023    4804 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:43:32.871112    4804 out.go:352] Setting JSON to false
	I1001 16:43:32.887897    4804 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4380,"bootTime":1727821832,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:43:32.887977    4804 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:43:32.892400    4804 out.go:177] * [running-upgrade-193000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:43:32.899424    4804 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:43:32.899484    4804 notify.go:220] Checking for updates...
	I1001 16:43:32.907350    4804 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:43:32.910352    4804 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:43:32.913285    4804 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:43:32.916375    4804 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:43:32.917437    4804 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:43:32.920621    4804 config.go:182] Loaded profile config "running-upgrade-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 16:43:32.924300    4804 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1001 16:43:32.927375    4804 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:43:32.930310    4804 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 16:43:32.937218    4804 start.go:297] selected driver: qemu2
	I1001 16:43:32.937225    4804 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50304 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1001 16:43:32.937302    4804 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:43:32.939454    4804 cni.go:84] Creating CNI manager for ""
	I1001 16:43:32.939495    4804 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:43:32.939518    4804 start.go:340] cluster config:
	{Name:running-upgrade-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50304 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1001 16:43:32.939568    4804 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:43:32.948337    4804 out.go:177] * Starting "running-upgrade-193000" primary control-plane node in "running-upgrade-193000" cluster
	I1001 16:43:32.952345    4804 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1001 16:43:32.952360    4804 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1001 16:43:32.952371    4804 cache.go:56] Caching tarball of preloaded images
	I1001 16:43:32.952420    4804 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:43:32.952426    4804 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1001 16:43:32.952497    4804 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/config.json ...
	I1001 16:43:32.952927    4804 start.go:360] acquireMachinesLock for running-upgrade-193000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:43:32.952957    4804 start.go:364] duration metric: took 25.292µs to acquireMachinesLock for "running-upgrade-193000"
	I1001 16:43:32.952964    4804 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:43:32.952968    4804 fix.go:54] fixHost starting: 
	I1001 16:43:32.953528    4804 fix.go:112] recreateIfNeeded on running-upgrade-193000: state=Running err=<nil>
	W1001 16:43:32.953536    4804 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:43:32.962298    4804 out.go:177] * Updating the running qemu2 "running-upgrade-193000" VM ...
	I1001 16:43:32.966338    4804 machine.go:93] provisionDockerMachine start ...
	I1001 16:43:32.966371    4804 main.go:141] libmachine: Using SSH client type: native
	I1001 16:43:32.966474    4804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100555c00] 0x100558440 <nil>  [] 0s} localhost 50233 <nil> <nil>}
	I1001 16:43:32.966479    4804 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 16:43:33.017501    4804 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-193000
	
	I1001 16:43:33.017520    4804 buildroot.go:166] provisioning hostname "running-upgrade-193000"
	I1001 16:43:33.017573    4804 main.go:141] libmachine: Using SSH client type: native
	I1001 16:43:33.017694    4804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100555c00] 0x100558440 <nil>  [] 0s} localhost 50233 <nil> <nil>}
	I1001 16:43:33.017699    4804 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-193000 && echo "running-upgrade-193000" | sudo tee /etc/hostname
	I1001 16:43:33.070289    4804 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-193000
	
	I1001 16:43:33.070347    4804 main.go:141] libmachine: Using SSH client type: native
	I1001 16:43:33.070470    4804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100555c00] 0x100558440 <nil>  [] 0s} localhost 50233 <nil> <nil>}
	I1001 16:43:33.070478    4804 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-193000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-193000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-193000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 16:43:33.120350    4804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 16:43:33.120362    4804 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19740-1141/.minikube CaCertPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19740-1141/.minikube}
	I1001 16:43:33.120370    4804 buildroot.go:174] setting up certificates
	I1001 16:43:33.120374    4804 provision.go:84] configureAuth start
	I1001 16:43:33.120382    4804 provision.go:143] copyHostCerts
	I1001 16:43:33.120468    4804 exec_runner.go:144] found /Users/jenkins/minikube-integration/19740-1141/.minikube/cert.pem, removing ...
	I1001 16:43:33.120474    4804 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19740-1141/.minikube/cert.pem
	I1001 16:43:33.120590    4804 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19740-1141/.minikube/cert.pem (1123 bytes)
	I1001 16:43:33.120753    4804 exec_runner.go:144] found /Users/jenkins/minikube-integration/19740-1141/.minikube/key.pem, removing ...
	I1001 16:43:33.120756    4804 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19740-1141/.minikube/key.pem
	I1001 16:43:33.120800    4804 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19740-1141/.minikube/key.pem (1679 bytes)
	I1001 16:43:33.120900    4804 exec_runner.go:144] found /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.pem, removing ...
	I1001 16:43:33.120903    4804 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.pem
	I1001 16:43:33.120944    4804 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.pem (1078 bytes)
	I1001 16:43:33.121022    4804 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-193000 san=[127.0.0.1 localhost minikube running-upgrade-193000]
	I1001 16:43:33.306783    4804 provision.go:177] copyRemoteCerts
	I1001 16:43:33.306840    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 16:43:33.306849    4804 sshutil.go:53] new ssh client: &{IP:localhost Port:50233 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/running-upgrade-193000/id_rsa Username:docker}
	I1001 16:43:33.332995    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1001 16:43:33.339876    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 16:43:33.346674    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 16:43:33.353722    4804 provision.go:87] duration metric: took 233.342292ms to configureAuth
	I1001 16:43:33.353731    4804 buildroot.go:189] setting minikube options for container-runtime
	I1001 16:43:33.353845    4804 config.go:182] Loaded profile config "running-upgrade-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 16:43:33.353890    4804 main.go:141] libmachine: Using SSH client type: native
	I1001 16:43:33.353983    4804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100555c00] 0x100558440 <nil>  [] 0s} localhost 50233 <nil> <nil>}
	I1001 16:43:33.353988    4804 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1001 16:43:33.405967    4804 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1001 16:43:33.405978    4804 buildroot.go:70] root file system type: tmpfs
	I1001 16:43:33.406026    4804 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1001 16:43:33.406086    4804 main.go:141] libmachine: Using SSH client type: native
	I1001 16:43:33.406196    4804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100555c00] 0x100558440 <nil>  [] 0s} localhost 50233 <nil> <nil>}
	I1001 16:43:33.406231    4804 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1001 16:43:33.456534    4804 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1001 16:43:33.456592    4804 main.go:141] libmachine: Using SSH client type: native
	I1001 16:43:33.456714    4804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100555c00] 0x100558440 <nil>  [] 0s} localhost 50233 <nil> <nil>}
	I1001 16:43:33.456722    4804 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1001 16:43:33.508005    4804 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 16:43:33.508016    4804 machine.go:96] duration metric: took 541.678959ms to provisionDockerMachine
	I1001 16:43:33.508021    4804 start.go:293] postStartSetup for "running-upgrade-193000" (driver="qemu2")
	I1001 16:43:33.508027    4804 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 16:43:33.508082    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 16:43:33.508092    4804 sshutil.go:53] new ssh client: &{IP:localhost Port:50233 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/running-upgrade-193000/id_rsa Username:docker}
	I1001 16:43:33.534785    4804 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 16:43:33.536189    4804 info.go:137] Remote host: Buildroot 2021.02.12
	I1001 16:43:33.536198    4804 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19740-1141/.minikube/addons for local assets ...
	I1001 16:43:33.536263    4804 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19740-1141/.minikube/files for local assets ...
	I1001 16:43:33.536366    4804 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19740-1141/.minikube/files/etc/ssl/certs/16592.pem -> 16592.pem in /etc/ssl/certs
	I1001 16:43:33.536461    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 16:43:33.539673    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/files/etc/ssl/certs/16592.pem --> /etc/ssl/certs/16592.pem (1708 bytes)
	I1001 16:43:33.546430    4804 start.go:296] duration metric: took 38.403458ms for postStartSetup
	I1001 16:43:33.546443    4804 fix.go:56] duration metric: took 593.482208ms for fixHost
	I1001 16:43:33.546490    4804 main.go:141] libmachine: Using SSH client type: native
	I1001 16:43:33.546590    4804 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100555c00] 0x100558440 <nil>  [] 0s} localhost 50233 <nil> <nil>}
	I1001 16:43:33.546594    4804 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 16:43:33.597986    4804 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727826213.619585294
	
	I1001 16:43:33.597996    4804 fix.go:216] guest clock: 1727826213.619585294
	I1001 16:43:33.598000    4804 fix.go:229] Guest: 2024-10-01 16:43:33.619585294 -0700 PDT Remote: 2024-10-01 16:43:33.546446 -0700 PDT m=+0.695825168 (delta=73.139294ms)
	I1001 16:43:33.598019    4804 fix.go:200] guest clock delta is within tolerance: 73.139294ms
	I1001 16:43:33.598022    4804 start.go:83] releasing machines lock for "running-upgrade-193000", held for 645.067708ms
	I1001 16:43:33.598088    4804 ssh_runner.go:195] Run: cat /version.json
	I1001 16:43:33.598089    4804 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 16:43:33.598096    4804 sshutil.go:53] new ssh client: &{IP:localhost Port:50233 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/running-upgrade-193000/id_rsa Username:docker}
	I1001 16:43:33.598105    4804 sshutil.go:53] new ssh client: &{IP:localhost Port:50233 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/running-upgrade-193000/id_rsa Username:docker}
	W1001 16:43:33.598720    4804 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50233: connect: connection refused
	I1001 16:43:33.598749    4804 retry.go:31] will retry after 150.827078ms: dial tcp [::1]:50233: connect: connection refused
	W1001 16:43:33.778495    4804 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1001 16:43:33.778573    4804 ssh_runner.go:195] Run: systemctl --version
	I1001 16:43:33.780383    4804 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 16:43:33.781896    4804 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 16:43:33.781928    4804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1001 16:43:33.784838    4804 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1001 16:43:33.789499    4804 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 16:43:33.789506    4804 start.go:495] detecting cgroup driver to use...
	I1001 16:43:33.789575    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 16:43:33.794806    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1001 16:43:33.797633    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1001 16:43:33.800892    4804 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1001 16:43:33.800918    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1001 16:43:33.804471    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 16:43:33.807777    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1001 16:43:33.810896    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 16:43:33.815843    4804 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 16:43:33.818825    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1001 16:43:33.821749    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1001 16:43:33.826672    4804 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1001 16:43:33.829533    4804 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 16:43:33.832095    4804 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 16:43:33.834699    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:43:33.929409    4804 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1001 16:43:33.940542    4804 start.go:495] detecting cgroup driver to use...
	I1001 16:43:33.940623    4804 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1001 16:43:33.945889    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 16:43:33.950980    4804 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 16:43:33.959581    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 16:43:33.964515    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1001 16:43:33.969139    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 16:43:33.975491    4804 ssh_runner.go:195] Run: which cri-dockerd
	I1001 16:43:33.977214    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1001 16:43:33.980213    4804 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1001 16:43:33.984782    4804 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1001 16:43:34.076290    4804 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1001 16:43:34.160442    4804 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1001 16:43:34.160498    4804 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1001 16:43:34.166427    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:43:34.261845    4804 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1001 16:43:36.935496    4804 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.673661792s)
	I1001 16:43:36.935560    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1001 16:43:36.940332    4804 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1001 16:43:36.946802    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1001 16:43:36.952012    4804 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1001 16:43:37.031902    4804 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1001 16:43:37.111311    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:43:37.188014    4804 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1001 16:43:37.194249    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1001 16:43:37.198579    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:43:37.276503    4804 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1001 16:43:37.314815    4804 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1001 16:43:37.314909    4804 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1001 16:43:37.316898    4804 start.go:563] Will wait 60s for crictl version
	I1001 16:43:37.316950    4804 ssh_runner.go:195] Run: which crictl
	I1001 16:43:37.318416    4804 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 16:43:37.329963    4804 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1001 16:43:37.330051    4804 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1001 16:43:37.342966    4804 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1001 16:43:37.362506    4804 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1001 16:43:37.362652    4804 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1001 16:43:37.364015    4804 kubeadm.go:883] updating cluster {Name:running-upgrade-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50304 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1001 16:43:37.364063    4804 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1001 16:43:37.364112    4804 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1001 16:43:37.375007    4804 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1001 16:43:37.375015    4804 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1001 16:43:37.375070    4804 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1001 16:43:37.377940    4804 ssh_runner.go:195] Run: which lz4
	I1001 16:43:37.379230    4804 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 16:43:37.380488    4804 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 16:43:37.380500    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1001 16:43:38.334767    4804 docker.go:649] duration metric: took 955.595666ms to copy over tarball
	I1001 16:43:38.334830    4804 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 16:43:39.577329    4804 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.242498333s)
	I1001 16:43:39.577343    4804 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 16:43:39.593298    4804 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1001 16:43:39.596768    4804 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1001 16:43:39.601868    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:43:39.685588    4804 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1001 16:43:41.230173    4804 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.544585333s)
	I1001 16:43:41.230281    4804 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1001 16:43:41.243090    4804 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1001 16:43:41.243111    4804 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1001 16:43:41.243116    4804 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1001 16:43:41.247514    4804 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1001 16:43:41.249522    4804 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:43:41.251886    4804 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1001 16:43:41.252460    4804 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1001 16:43:41.253455    4804 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1001 16:43:41.253471    4804 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:43:41.254778    4804 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 16:43:41.255884    4804 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1001 16:43:41.255911    4804 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 16:43:41.256065    4804 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1001 16:43:41.257172    4804 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1001 16:43:41.257463    4804 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 16:43:41.258419    4804 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1001 16:43:41.258424    4804 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 16:43:41.259370    4804 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1001 16:43:41.260279    4804 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1001 16:43:43.182379    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1001 16:43:43.219438    4804 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1001 16:43:43.219493    4804 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1001 16:43:43.219621    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1001 16:43:43.240735    4804 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1001 16:43:43.253726    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 16:43:43.269848    4804 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1001 16:43:43.269873    4804 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 16:43:43.269950    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 16:43:43.282545    4804 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1001 16:43:43.305866    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1001 16:43:43.319382    4804 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1001 16:43:43.319404    4804 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1001 16:43:43.319478    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1001 16:43:43.331118    4804 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1001 16:43:43.337494    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1001 16:43:43.348926    4804 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1001 16:43:43.348958    4804 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1001 16:43:43.349025    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1001 16:43:43.359220    4804 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1001 16:43:43.359349    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1001 16:43:43.361062    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1001 16:43:43.361074    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1001 16:43:43.368697    4804 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1001 16:43:43.368706    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1001 16:43:43.396911    4804 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W1001 16:43:43.577053    4804 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1001 16:43:43.577213    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:43:43.590084    4804 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1001 16:43:43.590109    4804 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:43:43.590195    4804 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:43:43.607309    4804 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1001 16:43:43.607443    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1001 16:43:43.609003    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1001 16:43:43.609014    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1001 16:43:43.643054    4804 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1001 16:43:43.643069    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	W1001 16:43:43.843521    4804 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1001 16:43:43.843652    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1001 16:43:43.850887    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1001 16:43:43.859781    4804 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1001 16:43:43.888854    4804 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1001 16:43:43.888901    4804 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1001 16:43:43.888897    4804 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1001 16:43:43.888916    4804 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1001 16:43:43.888927    4804 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1001 16:43:43.888916    4804 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 16:43:43.888916    4804 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1001 16:43:43.888992    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1001 16:43:43.888992    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1001 16:43:43.888992    4804 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1001 16:43:43.908899    4804 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1001 16:43:43.909035    4804 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1001 16:43:43.910074    4804 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1001 16:43:43.910203    4804 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1001 16:43:43.910993    4804 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1001 16:43:43.911006    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1001 16:43:43.955689    4804 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1001 16:43:43.955710    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1001 16:43:43.996725    4804 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1001 16:43:43.996769    4804 cache_images.go:92] duration metric: took 2.75367575s to LoadCachedImages
	W1001 16:43:43.996814    4804 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I1001 16:43:43.996821    4804 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1001 16:43:43.996867    4804 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-193000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 16:43:43.996938    4804 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1001 16:43:44.010349    4804 cni.go:84] Creating CNI manager for ""
	I1001 16:43:44.010364    4804 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:43:44.010368    4804 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 16:43:44.010377    4804 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-193000 NodeName:running-upgrade-193000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 16:43:44.010432    4804 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-193000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 16:43:44.010485    4804 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1001 16:43:44.013292    4804 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 16:43:44.013318    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 16:43:44.016547    4804 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1001 16:43:44.021607    4804 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 16:43:44.026404    4804 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1001 16:43:44.031889    4804 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1001 16:43:44.033195    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:43:44.113501    4804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 16:43:44.118908    4804 certs.go:68] Setting up /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000 for IP: 10.0.2.15
	I1001 16:43:44.118915    4804 certs.go:194] generating shared ca certs ...
	I1001 16:43:44.118924    4804 certs.go:226] acquiring lock for ca certs: {Name:mk74f46ad151665c6dd5cd39311b967c23e44dd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:43:44.119088    4804 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.key
	I1001 16:43:44.119129    4804 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/proxy-client-ca.key
	I1001 16:43:44.119134    4804 certs.go:256] generating profile certs ...
	I1001 16:43:44.119197    4804 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/client.key
	I1001 16:43:44.119216    4804 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/apiserver.key.e657745e
	I1001 16:43:44.119227    4804 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/apiserver.crt.e657745e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1001 16:43:44.163008    4804 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/apiserver.crt.e657745e ...
	I1001 16:43:44.163013    4804 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/apiserver.crt.e657745e: {Name:mkb1886eb061717344edc4ffdb683fcea1e6cd97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:43:44.169358    4804 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/apiserver.key.e657745e ...
	I1001 16:43:44.169364    4804 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/apiserver.key.e657745e: {Name:mk04604ea966517678b8030dad156fc95f25ab76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:43:44.169515    4804 certs.go:381] copying /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/apiserver.crt.e657745e -> /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/apiserver.crt
	I1001 16:43:44.169656    4804 certs.go:385] copying /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/apiserver.key.e657745e -> /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/apiserver.key
	I1001 16:43:44.169794    4804 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/proxy-client.key
	I1001 16:43:44.169931    4804 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/1659.pem (1338 bytes)
	W1001 16:43:44.169953    4804 certs.go:480] ignoring /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/1659_empty.pem, impossibly tiny 0 bytes
	I1001 16:43:44.169959    4804 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 16:43:44.169978    4804 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem (1078 bytes)
	I1001 16:43:44.169997    4804 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem (1123 bytes)
	I1001 16:43:44.170014    4804 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/key.pem (1679 bytes)
	I1001 16:43:44.170050    4804 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/files/etc/ssl/certs/16592.pem (1708 bytes)
	I1001 16:43:44.170406    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 16:43:44.177613    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 16:43:44.184478    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 16:43:44.191918    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1001 16:43:44.199415    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1001 16:43:44.206114    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 16:43:44.212793    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 16:43:44.220104    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1001 16:43:44.227473    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/files/etc/ssl/certs/16592.pem --> /usr/share/ca-certificates/16592.pem (1708 bytes)
	I1001 16:43:44.234153    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 16:43:44.240623    4804 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/1659.pem --> /usr/share/ca-certificates/1659.pem (1338 bytes)
	I1001 16:43:44.247829    4804 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 16:43:44.252781    4804 ssh_runner.go:195] Run: openssl version
	I1001 16:43:44.254621    4804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1659.pem && ln -fs /usr/share/ca-certificates/1659.pem /etc/ssl/certs/1659.pem"
	I1001 16:43:44.257538    4804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1659.pem
	I1001 16:43:44.258991    4804 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:04 /usr/share/ca-certificates/1659.pem
	I1001 16:43:44.259020    4804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1659.pem
	I1001 16:43:44.260681    4804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1659.pem /etc/ssl/certs/51391683.0"
	I1001 16:43:44.263769    4804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16592.pem && ln -fs /usr/share/ca-certificates/16592.pem /etc/ssl/certs/16592.pem"
	I1001 16:43:44.266607    4804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16592.pem
	I1001 16:43:44.267941    4804 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:04 /usr/share/ca-certificates/16592.pem
	I1001 16:43:44.267964    4804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16592.pem
	I1001 16:43:44.269747    4804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16592.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 16:43:44.272760    4804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 16:43:44.276142    4804 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 16:43:44.277576    4804 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I1001 16:43:44.277597    4804 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 16:43:44.279494    4804 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 16:43:44.282155    4804 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 16:43:44.283610    4804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 16:43:44.285480    4804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 16:43:44.287386    4804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 16:43:44.289294    4804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 16:43:44.291304    4804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 16:43:44.293170    4804 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1001 16:43:44.295020    4804 kubeadm.go:392] StartCluster: {Name:running-upgrade-193000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50304 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-193000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1001 16:43:44.295100    4804 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1001 16:43:44.305150    4804 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 16:43:44.308701    4804 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1001 16:43:44.308710    4804 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1001 16:43:44.308734    4804 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1001 16:43:44.311972    4804 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1001 16:43:44.312204    4804 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-193000" does not appear in /Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:43:44.312250    4804 kubeconfig.go:62] /Users/jenkins/minikube-integration/19740-1141/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-193000" cluster setting kubeconfig missing "running-upgrade-193000" context setting]
	I1001 16:43:44.312369    4804 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/kubeconfig: {Name:mk6821adb20f42e2e1842a7c6bcaf1ce77531dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:43:44.313026    4804 kapi.go:59] client config for running-upgrade-193000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/client.key", CAFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101b2e5d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 16:43:44.313348    4804 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1001 16:43:44.316235    4804 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-193000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I1001 16:43:44.316243    4804 kubeadm.go:1160] stopping kube-system containers ...
	I1001 16:43:44.316296    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1001 16:43:44.327142    4804 docker.go:483] Stopping containers: [4be4656aa4bc 3ff6bb6a30aa beb0eba06e68 2f9e856bdb0a 7f3704770814 c7e4b32a30f5 b3442e418682 14a756901988 94e8647254fc 878e5dcff978 ba2e1cbade5a a5f0f6c6c598 dd58c7e2848c 52fc4c065854]
	I1001 16:43:44.327224    4804 ssh_runner.go:195] Run: docker stop 4be4656aa4bc 3ff6bb6a30aa beb0eba06e68 2f9e856bdb0a 7f3704770814 c7e4b32a30f5 b3442e418682 14a756901988 94e8647254fc 878e5dcff978 ba2e1cbade5a a5f0f6c6c598 dd58c7e2848c 52fc4c065854
	I1001 16:43:44.338722    4804 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1001 16:43:44.425576    4804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 16:43:44.429720    4804 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Oct  1 23:43 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Oct  1 23:43 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct  1 23:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Oct  1 23:43 /etc/kubernetes/scheduler.conf
	
	I1001 16:43:44.429762    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/admin.conf
	I1001 16:43:44.433164    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1001 16:43:44.433199    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 16:43:44.436833    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/kubelet.conf
	I1001 16:43:44.440262    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1001 16:43:44.440289    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 16:43:44.443595    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/controller-manager.conf
	I1001 16:43:44.446819    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1001 16:43:44.446855    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 16:43:44.449833    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/scheduler.conf
	I1001 16:43:44.452435    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1001 16:43:44.452463    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 16:43:44.455811    4804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 16:43:44.459198    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:43:44.495728    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:43:44.984130    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:43:45.180148    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:43:45.201091    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:43:45.221611    4804 api_server.go:52] waiting for apiserver process to appear ...
	I1001 16:43:45.221710    4804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 16:43:45.724081    4804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 16:43:46.223820    4804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 16:43:46.724104    4804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 16:43:47.223743    4804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 16:43:47.228233    4804 api_server.go:72] duration metric: took 2.006643792s to wait for apiserver process to appear ...
	I1001 16:43:47.228241    4804 api_server.go:88] waiting for apiserver healthz status ...
	I1001 16:43:47.228255    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:43:52.230304    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:43:52.230363    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:43:57.230685    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:43:57.230768    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:44:02.231688    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:44:02.231801    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:44:07.233144    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:44:07.233241    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:44:12.233991    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:44:12.234083    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:44:17.235926    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:44:17.236047    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:44:22.238331    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:44:22.238366    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:44:27.240586    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:44:27.240611    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:44:32.242862    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:44:32.242961    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:44:37.245447    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:44:37.245470    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:44:42.247644    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:44:42.247678    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:44:47.249372    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:44:47.249969    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:44:47.300381    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:44:47.300530    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:44:47.319495    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:44:47.319600    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:44:47.333291    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:44:47.333377    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:44:47.344835    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:44:47.344916    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:44:47.355452    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:44:47.355537    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:44:47.366802    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:44:47.366890    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:44:47.376774    4804 logs.go:282] 0 containers: []
	W1001 16:44:47.376784    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:44:47.376854    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:44:47.388247    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:44:47.388264    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:44:47.388269    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:44:47.403709    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:44:47.403723    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:44:47.417849    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:44:47.417863    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:44:47.445135    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:44:47.445154    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:44:47.482997    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:44:47.483118    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:44:47.483396    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:44:47.483403    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:44:47.487937    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:44:47.487944    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:44:47.583376    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:44:47.583391    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:44:47.599077    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:44:47.599091    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:44:47.619714    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:44:47.619722    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:44:47.634447    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:44:47.634455    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:44:47.651976    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:44:47.651989    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:44:47.663031    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:44:47.663044    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:44:47.674588    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:44:47.674596    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:44:47.686646    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:44:47.686660    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:44:47.704426    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:44:47.704439    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:44:47.719316    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:44:47.719330    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:44:47.731127    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:44:47.731143    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:44:47.742508    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:44:47.742525    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:44:47.742552    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:44:47.742557    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:44:47.742560    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:44:47.742564    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:44:47.742567    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:44:57.746583    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:45:02.747542    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:45:02.747730    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:45:02.759208    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:45:02.759308    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:45:02.770030    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:45:02.770127    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:45:02.784182    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:45:02.784276    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:45:02.794524    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:45:02.794609    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:45:02.805096    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:45:02.805181    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:45:02.815721    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:45:02.815797    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:45:02.825779    4804 logs.go:282] 0 containers: []
	W1001 16:45:02.825794    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:45:02.825863    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:45:02.835979    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:45:02.836000    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:45:02.836005    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:45:02.853888    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:45:02.853898    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:45:02.865385    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:45:02.865394    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:45:02.877523    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:45:02.877534    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:45:02.891399    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:45:02.891412    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:45:02.908885    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:45:02.908894    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:45:02.947799    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:45:02.947891    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:45:02.948176    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:45:02.948182    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:45:02.966976    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:45:02.966986    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:45:02.981873    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:45:02.981881    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:45:02.995686    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:45:02.995696    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:45:03.012832    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:45:03.012846    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:45:03.030027    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:45:03.030048    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:45:03.041331    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:45:03.041341    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:45:03.046164    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:45:03.046173    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:45:03.086961    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:45:03.086971    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:45:03.113057    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:45:03.113065    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:45:03.124362    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:45:03.124376    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:45:03.135561    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:45:03.135572    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:45:03.135597    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:45:03.135604    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:45:03.135609    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:45:03.135612    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:45:03.135615    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:45:13.139144    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:45:18.141711    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:45:18.141869    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:45:18.155990    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:45:18.156089    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:45:18.171708    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:45:18.171802    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:45:18.184138    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:45:18.184216    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:45:18.195315    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:45:18.195393    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:45:18.205733    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:45:18.205810    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:45:18.216578    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:45:18.216648    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:45:18.227387    4804 logs.go:282] 0 containers: []
	W1001 16:45:18.227399    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:45:18.227475    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:45:18.238371    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:45:18.238387    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:45:18.238393    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:45:18.253222    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:45:18.253237    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:45:18.271482    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:45:18.271492    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:45:18.290546    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:45:18.290555    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:45:18.309481    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:45:18.309491    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:45:18.346388    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:45:18.346400    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:45:18.368242    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:45:18.368252    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:45:18.379822    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:45:18.379834    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:45:18.404928    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:45:18.404935    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:45:18.416946    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:45:18.416957    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:45:18.457105    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:45:18.457197    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:45:18.457487    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:45:18.457497    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:45:18.471887    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:45:18.471898    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:45:18.490005    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:45:18.490014    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:45:18.506578    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:45:18.506589    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:45:18.511011    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:45:18.511016    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:45:18.522865    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:45:18.522876    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:45:18.534505    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:45:18.534515    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:45:18.545446    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:45:18.545459    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:45:18.545493    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:45:18.545497    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:45:18.545500    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:45:18.545504    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:45:18.545508    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:45:28.549623    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:45:33.552168    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:45:33.552451    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:45:33.577399    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:45:33.577536    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:45:33.594499    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:45:33.594602    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:45:33.606871    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:45:33.606952    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:45:33.617462    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:45:33.617536    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:45:33.628119    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:45:33.628204    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:45:33.638733    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:45:33.638810    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:45:33.650818    4804 logs.go:282] 0 containers: []
	W1001 16:45:33.650830    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:45:33.650904    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:45:33.661211    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:45:33.661227    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:45:33.661231    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:45:33.679083    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:45:33.679093    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:45:33.694353    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:45:33.694362    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:45:33.705672    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:45:33.705684    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:45:33.719616    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:45:33.719625    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:45:33.756013    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:45:33.756029    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:45:33.767954    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:45:33.767965    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:45:33.778785    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:45:33.778795    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:45:33.790361    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:45:33.790376    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:45:33.830090    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:45:33.830188    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:45:33.830469    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:45:33.830478    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:45:33.852259    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:45:33.852272    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:45:33.865983    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:45:33.865996    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:45:33.880101    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:45:33.880111    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:45:33.900602    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:45:33.900618    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:45:33.905441    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:45:33.905447    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:45:33.917127    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:45:33.917137    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:45:33.942124    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:45:33.942131    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:45:33.953347    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:45:33.953358    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:45:33.953383    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:45:33.953388    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:45:33.953392    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:45:33.953397    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:45:33.953400    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:45:43.957375    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:45:48.958046    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:45:48.958184    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:45:48.969378    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:45:48.969464    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:45:48.980842    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:45:48.980927    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:45:48.991378    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:45:48.991464    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:45:49.001725    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:45:49.001813    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:45:49.011939    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:45:49.012010    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:45:49.022730    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:45:49.022806    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:45:49.036452    4804 logs.go:282] 0 containers: []
	W1001 16:45:49.036464    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:45:49.036537    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:45:49.047690    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:45:49.047706    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:45:49.047713    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:45:49.085486    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:45:49.085586    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:45:49.085871    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:45:49.085880    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:45:49.104754    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:45:49.104765    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:45:49.128353    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:45:49.128368    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:45:49.140057    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:45:49.140068    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:45:49.144941    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:45:49.144949    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:45:49.181508    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:45:49.181520    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:45:49.195949    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:45:49.195962    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:45:49.211277    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:45:49.211290    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:45:49.225626    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:45:49.225642    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:45:49.237197    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:45:49.237210    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:45:49.248949    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:45:49.248962    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:45:49.260181    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:45:49.260190    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:45:49.275044    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:45:49.275060    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:45:49.297278    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:45:49.297289    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:45:49.309076    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:45:49.309090    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:45:49.334200    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:45:49.334209    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:45:49.345660    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:45:49.345670    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:45:49.345699    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:45:49.345703    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:45:49.345707    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:45:49.345710    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:45:49.345713    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:45:59.347823    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:04.350148    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:04.350654    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:46:04.387665    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:46:04.387828    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:46:04.412055    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:46:04.412166    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:46:04.426774    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:46:04.426865    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:46:04.440177    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:46:04.440267    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:46:04.450988    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:46:04.451073    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:46:04.461988    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:46:04.462072    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:46:04.473067    4804 logs.go:282] 0 containers: []
	W1001 16:46:04.473082    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:46:04.473160    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:46:04.484206    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:46:04.484224    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:46:04.484229    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:46:04.502155    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:46:04.502167    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:46:04.526105    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:46:04.526112    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:46:04.562661    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:46:04.562672    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:46:04.577150    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:46:04.577167    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:46:04.601250    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:46:04.601265    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:46:04.640984    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:46:04.641085    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:46:04.641373    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:46:04.641380    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:46:04.661274    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:46:04.661288    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:46:04.677337    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:46:04.677347    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:46:04.689222    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:46:04.689234    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:46:04.701738    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:46:04.701749    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:46:04.714115    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:46:04.714130    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:46:04.718605    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:46:04.718615    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:46:04.731979    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:46:04.731995    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:46:04.743946    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:46:04.743958    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:46:04.755381    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:46:04.755394    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:46:04.769865    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:46:04.769875    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:46:04.785025    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:46:04.785038    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:46:04.785063    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:46:04.785067    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:46:04.785071    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:46:04.785074    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:46:04.785076    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:46:14.789157    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:19.791205    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:19.791681    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:46:19.824374    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:46:19.824517    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:46:19.841372    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:46:19.841480    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:46:19.858382    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:46:19.858454    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:46:19.869188    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:46:19.869271    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:46:19.879816    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:46:19.879906    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:46:19.897951    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:46:19.898037    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:46:19.914539    4804 logs.go:282] 0 containers: []
	W1001 16:46:19.914550    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:46:19.914620    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:46:19.925711    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:46:19.925728    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:46:19.925734    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:46:19.930327    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:46:19.930337    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:46:19.943619    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:46:19.943629    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:46:19.958033    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:46:19.958043    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:46:19.970445    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:46:19.970456    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:46:19.987757    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:46:19.987771    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:46:20.024966    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:46:20.025057    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:46:20.025330    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:46:20.025335    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:46:20.036083    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:46:20.036099    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:46:20.051088    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:46:20.051098    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:46:20.062637    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:46:20.062647    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:46:20.076059    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:46:20.076069    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:46:20.093354    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:46:20.093365    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:46:20.127293    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:46:20.127304    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:46:20.146821    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:46:20.146836    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:46:20.159347    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:46:20.159359    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:46:20.171805    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:46:20.171818    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:46:20.197020    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:46:20.197027    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:46:20.209411    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:46:20.209428    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:46:20.209459    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:46:20.209464    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:46:20.209467    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:46:20.209472    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:46:20.209475    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:46:30.211695    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:35.214323    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:35.214828    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:46:35.250538    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:46:35.250710    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:46:35.271178    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:46:35.271309    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:46:35.290861    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:46:35.290956    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:46:35.302580    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:46:35.302672    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:46:35.313191    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:46:35.313273    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:46:35.324103    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:46:35.324187    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:46:35.334330    4804 logs.go:282] 0 containers: []
	W1001 16:46:35.334341    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:46:35.334412    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:46:35.348245    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:46:35.348263    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:46:35.348269    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:46:35.362701    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:46:35.362712    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:46:35.373591    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:46:35.373602    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:46:35.385123    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:46:35.385133    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:46:35.397069    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:46:35.397084    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:46:35.434340    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:46:35.434449    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:46:35.434738    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:46:35.434743    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:46:35.449135    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:46:35.449145    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:46:35.466202    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:46:35.466213    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:46:35.484769    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:46:35.484781    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:46:35.500248    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:46:35.500258    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:46:35.505015    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:46:35.505022    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:46:35.538924    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:46:35.538936    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:46:35.553515    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:46:35.553527    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:46:35.576966    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:46:35.576973    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:46:35.595913    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:46:35.595923    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:46:35.609084    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:46:35.609095    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:46:35.627517    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:46:35.627527    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:46:35.639544    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:46:35.639560    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:46:35.639592    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:46:35.639597    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:46:35.639602    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:46:35.639605    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:46:35.639609    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:46:45.643713    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:50.646077    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:50.646431    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:46:50.674138    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:46:50.674284    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:46:50.691109    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:46:50.691213    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:46:50.707190    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:46:50.707272    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:46:50.717719    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:46:50.717808    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:46:50.727966    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:46:50.728044    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:46:50.738696    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:46:50.738783    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:46:50.749086    4804 logs.go:282] 0 containers: []
	W1001 16:46:50.749097    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:46:50.749167    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:46:50.759528    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:46:50.759546    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:46:50.759552    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:46:50.773417    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:46:50.773426    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:46:50.789490    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:46:50.789500    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:46:50.810967    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:46:50.810990    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:46:50.830118    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:46:50.830129    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:46:50.849758    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:46:50.849767    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:46:50.861032    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:46:50.861042    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:46:50.872877    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:46:50.872887    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:46:50.884228    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:46:50.884238    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:46:50.902253    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:46:50.902262    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:46:50.914010    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:46:50.914020    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:46:50.953225    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:46:50.953317    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:46:50.953611    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:46:50.953619    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:46:50.958033    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:46:50.958041    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:46:50.994265    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:46:50.994278    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:46:51.017269    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:46:51.017277    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:46:51.031707    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:46:51.031717    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:46:51.047049    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:46:51.047060    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:46:51.058233    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:46:51.058247    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:46:51.058281    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:46:51.058285    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:46:51.058289    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:46:51.058293    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:46:51.058297    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
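Each retry cycle above follows the same pattern: list candidate containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tail every hit with `docker logs --tail 400 <id>`. A minimal, self-contained sketch of that pattern, run locally with os/exec rather than through minikube's ssh_runner (the helper name is made up for illustration, not taken from the minikube source):

```go
// Illustrative only: mirrors the docker ps / docker logs pattern visible in
// the log above, not minikube's actual logs.go implementation.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of containers whose name matches k8s_<component>.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println("listing", component, "failed:", err)
			continue
		}
		for _, id := range ids {
			// Tail the same 400-line window the test harness uses above.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s]\n%s\n", component, id, logs)
		}
	}
}
```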
	I1001 16:47:01.062340    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:06.064669    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:06.065020    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:47:06.090876    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:47:06.091020    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:47:06.108916    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:47:06.109020    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:47:06.121937    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:47:06.122031    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:47:06.137619    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:47:06.137708    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:47:06.147729    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:47:06.147803    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:47:06.157810    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:47:06.157884    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:47:06.168168    4804 logs.go:282] 0 containers: []
	W1001 16:47:06.168179    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:47:06.168252    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:47:06.178618    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:47:06.178634    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:47:06.178638    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:47:06.192297    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:47:06.192306    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:47:06.203419    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:47:06.203429    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:47:06.216700    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:47:06.216711    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:47:06.254303    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:47:06.254394    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:47:06.254668    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:47:06.254672    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:47:06.269178    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:47:06.269189    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:47:06.288228    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:47:06.288241    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:47:06.309412    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:47:06.309424    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:47:06.325524    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:47:06.325535    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:47:06.330170    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:47:06.330179    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:47:06.344449    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:47:06.344461    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:47:06.362022    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:47:06.362034    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:47:06.380443    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:47:06.380453    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:06.392488    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:47:06.392504    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:47:06.407001    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:47:06.407011    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:47:06.419356    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:47:06.419364    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:47:06.445121    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:47:06.445132    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:47:06.481988    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:47:06.482003    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:47:06.482033    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:47:06.482039    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:47:06.482043    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:47:06.482047    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:47:06.482050    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:47:16.486119    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:21.487777    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:21.488016    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:47:21.500307    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:47:21.500397    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:47:21.511575    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:47:21.511659    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:47:21.522376    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:47:21.522462    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:47:21.533464    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:47:21.533546    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:47:21.544016    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:47:21.544096    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:47:21.554559    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:47:21.554632    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:47:21.564895    4804 logs.go:282] 0 containers: []
	W1001 16:47:21.564906    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:47:21.564979    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:47:21.579638    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:47:21.579656    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:47:21.579662    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:47:21.596384    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:47:21.596393    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:47:21.608116    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:47:21.608125    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:47:21.619545    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:47:21.619559    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:47:21.633599    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:47:21.633614    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:47:21.647540    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:47:21.647554    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:47:21.658232    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:47:21.658242    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:47:21.669876    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:47:21.669884    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:47:21.681416    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:47:21.681424    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:21.693046    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:47:21.693057    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:47:21.712430    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:47:21.712445    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:47:21.736438    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:47:21.736445    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:47:21.741017    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:47:21.741022    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:47:21.762046    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:47:21.762060    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:47:21.779098    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:47:21.779112    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:47:21.817531    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:47:21.817623    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:47:21.817895    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:47:21.817899    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:47:21.851171    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:47:21.851180    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:47:21.865682    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:47:21.865691    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:47:21.865721    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:47:21.865724    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:47:21.865728    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:47:21.865732    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:47:21.865736    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:47:31.869792    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:36.872157    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
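Every `Checking apiserver healthz` / `stopped:` pair above is one iteration of a poll against https://10.0.2.15:8443/healthz that gives up after about five seconds and retries roughly every ten. A small sketch of such a probe; the endpoint and timings come from the log, while the TLS handling (skipping verification instead of loading the cluster CA) is a simplification, not what minikube does:

```go
// Illustrative probe of the apiserver /healthz endpoint; not minikube's code.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func probe(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout,
		// Assumption for the sketch: skip certificate verification instead of
		// loading the cluster CA as the real tooling does.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "Client.Timeout exceeded while awaiting headers"
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unhealthy: HTTP %d", resp.StatusCode)
	}
	return nil
}

func main() {
	for attempt := 1; attempt <= 3; attempt++ {
		if err := probe("https://10.0.2.15:8443/healthz", 5*time.Second); err != nil {
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(10 * time.Second) // the log shows ~10s between attempts
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}
```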
	I1001 16:47:36.872647    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:47:36.911817    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:47:36.911977    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:47:36.934591    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:47:36.934707    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:47:36.951099    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:47:36.951186    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:47:36.963809    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:47:36.963889    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:47:36.975315    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:47:36.975390    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:47:36.985951    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:47:36.986026    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:47:36.999698    4804 logs.go:282] 0 containers: []
	W1001 16:47:36.999709    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:47:36.999785    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:47:37.010283    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:47:37.010300    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:47:37.010306    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:47:37.045152    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:47:37.045168    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:47:37.067891    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:47:37.067899    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:47:37.082588    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:47:37.082598    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:47:37.099870    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:47:37.099879    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:47:37.111289    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:47:37.111304    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:47:37.122798    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:47:37.122809    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:47:37.137347    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:47:37.137358    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:47:37.149418    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:47:37.149430    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:37.160820    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:47:37.160835    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:47:37.172452    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:47:37.172466    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:47:37.189978    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:47:37.189991    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:47:37.229536    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:47:37.229638    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:47:37.229932    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:47:37.229940    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:47:37.234521    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:47:37.234527    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:47:37.254557    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:47:37.254571    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:47:37.268443    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:47:37.268456    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:47:37.283914    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:47:37.283924    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:47:37.295902    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:47:37.295916    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:47:37.295942    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:47:37.295947    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:47:37.296007    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:47:37.296013    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:47:37.296018    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:47:47.298930    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:52.301171    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:52.301333    4804 kubeadm.go:597] duration metric: took 4m7.995156667s to restartPrimaryControlPlane
	W1001 16:47:52.301511    4804 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1001 16:47:52.301567    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1001 16:47:53.279278    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 16:47:53.284208    4804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 16:47:53.287193    4804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 16:47:53.289753    4804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 16:47:53.289760    4804 kubeadm.go:157] found existing configuration files:
	
	I1001 16:47:53.289783    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/admin.conf
	I1001 16:47:53.292244    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 16:47:53.292268    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 16:47:53.295532    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/kubelet.conf
	I1001 16:47:53.298468    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 16:47:53.298500    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 16:47:53.301033    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/controller-manager.conf
	I1001 16:47:53.303998    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 16:47:53.304019    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 16:47:53.306923    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/scheduler.conf
	I1001 16:47:53.309482    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 16:47:53.309508    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
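The grep/rm sequence above is the stale-config check: for each kubeconfig under /etc/kubernetes, minikube greps for the expected control-plane endpoint and removes the file when the endpoint is absent (here the files simply do not exist yet), before re-running kubeadm init below. A condensed sketch of that loop, executed directly on the node instead of via ssh_runner:

```go
// Condensed version of the stale-config grep/rm loop shown in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50304"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is missing or the file is absent,
		// which is what triggers the "may not be in ... - will remove" branch above.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s: endpoint not found, removing\n", f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}
```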
	I1001 16:47:53.312268    4804 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 16:47:53.330430    4804 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1001 16:47:53.330514    4804 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 16:47:53.379587    4804 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 16:47:53.379653    4804 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 16:47:53.379697    4804 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 16:47:53.429808    4804 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 16:47:53.433990    4804 out.go:235]   - Generating certificates and keys ...
	I1001 16:47:53.434025    4804 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 16:47:53.434081    4804 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 16:47:53.434219    4804 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 16:47:53.434254    4804 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 16:47:53.434294    4804 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 16:47:53.434325    4804 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 16:47:53.434360    4804 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 16:47:53.434396    4804 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 16:47:53.434501    4804 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 16:47:53.434611    4804 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 16:47:53.434711    4804 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 16:47:53.434747    4804 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 16:47:53.543588    4804 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 16:47:53.662582    4804 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 16:47:53.766632    4804 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 16:47:53.814676    4804 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 16:47:53.843001    4804 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 16:47:53.843332    4804 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 16:47:53.843353    4804 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 16:47:53.930318    4804 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 16:47:53.934524    4804 out.go:235]   - Booting up control plane ...
	I1001 16:47:53.934569    4804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 16:47:53.934633    4804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 16:47:53.934680    4804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 16:47:53.934743    4804 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 16:47:53.936799    4804 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 16:47:58.940449    4804 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.003640 seconds
	I1001 16:47:58.940535    4804 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 16:47:58.945486    4804 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 16:47:59.454119    4804 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 16:47:59.454267    4804 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-193000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 16:47:59.960070    4804 kubeadm.go:310] [bootstrap-token] Using token: eg9n22.z0ark4bzn3ubtph2
	I1001 16:47:59.964649    4804 out.go:235]   - Configuring RBAC rules ...
	I1001 16:47:59.964723    4804 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 16:47:59.964767    4804 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 16:47:59.966818    4804 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 16:47:59.971386    4804 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 16:47:59.972250    4804 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 16:47:59.973127    4804 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 16:47:59.976695    4804 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 16:48:00.145577    4804 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 16:48:00.364088    4804 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 16:48:00.364567    4804 kubeadm.go:310] 
	I1001 16:48:00.364605    4804 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 16:48:00.364608    4804 kubeadm.go:310] 
	I1001 16:48:00.364646    4804 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 16:48:00.364648    4804 kubeadm.go:310] 
	I1001 16:48:00.364661    4804 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 16:48:00.364716    4804 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 16:48:00.364748    4804 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 16:48:00.364787    4804 kubeadm.go:310] 
	I1001 16:48:00.364893    4804 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 16:48:00.364897    4804 kubeadm.go:310] 
	I1001 16:48:00.364960    4804 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 16:48:00.364964    4804 kubeadm.go:310] 
	I1001 16:48:00.365032    4804 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 16:48:00.365072    4804 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 16:48:00.365159    4804 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 16:48:00.365178    4804 kubeadm.go:310] 
	I1001 16:48:00.365276    4804 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 16:48:00.365313    4804 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 16:48:00.365316    4804 kubeadm.go:310] 
	I1001 16:48:00.365367    4804 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token eg9n22.z0ark4bzn3ubtph2 \
	I1001 16:48:00.365430    4804 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7410ba584d1420d22d17a85d1568f395de246b7fddabe3e224321915d0b92005 \
	I1001 16:48:00.365443    4804 kubeadm.go:310] 	--control-plane 
	I1001 16:48:00.365445    4804 kubeadm.go:310] 
	I1001 16:48:00.365486    4804 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 16:48:00.365491    4804 kubeadm.go:310] 
	I1001 16:48:00.365533    4804 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token eg9n22.z0ark4bzn3ubtph2 \
	I1001 16:48:00.365597    4804 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7410ba584d1420d22d17a85d1568f395de246b7fddabe3e224321915d0b92005 
	I1001 16:48:00.365670    4804 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 16:48:00.365679    4804 cni.go:84] Creating CNI manager for ""
	I1001 16:48:00.365686    4804 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:48:00.373343    4804 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 16:48:00.376241    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 16:48:00.379391    4804 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
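The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For illustration only, a bridge conflist of roughly that shape might be written like this; the exact contents, subnet, and field values minikube uses are assumptions here:

```go
// Illustrative only: the exact 496-byte conflist is not shown in the log, so
// the JSON below is a representative bridge configuration, not minikube's file.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Mirrors the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step above.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```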
	I1001 16:48:00.384284    4804 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 16:48:00.384331    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 16:48:00.384343    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-193000 minikube.k8s.io/updated_at=2024_10_01T16_48_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=running-upgrade-193000 minikube.k8s.io/primary=true
	I1001 16:48:00.426093    4804 kubeadm.go:1113] duration metric: took 41.799667ms to wait for elevateKubeSystemPrivileges
	I1001 16:48:00.426110    4804 ops.go:34] apiserver oom_adj: -16
	I1001 16:48:00.426114    4804 kubeadm.go:394] duration metric: took 4m16.133723667s to StartCluster
	I1001 16:48:00.426123    4804 settings.go:142] acquiring lock: {Name:mkd0df72d236cca9ab7a62ebb6aa022c207aaa93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:48:00.426212    4804 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:48:00.426581    4804 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/kubeconfig: {Name:mk6821adb20f42e2e1842a7c6bcaf1ce77531dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:48:00.426779    4804 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:48:00.426828    4804 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 16:48:00.426861    4804 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-193000"
	I1001 16:48:00.426869    4804 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-193000"
	W1001 16:48:00.426873    4804 addons.go:243] addon storage-provisioner should already be in state true
	I1001 16:48:00.426884    4804 host.go:66] Checking if "running-upgrade-193000" exists ...
	I1001 16:48:00.426883    4804 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-193000"
	I1001 16:48:00.426898    4804 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-193000"
	I1001 16:48:00.427197    4804 config.go:182] Loaded profile config "running-upgrade-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 16:48:00.427929    4804 kapi.go:59] client config for running-upgrade-193000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/client.key", CAFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101b2e5d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
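The rest.Config dump above is the client configuration built for this profile: the apiserver endpoint inside the VM plus the profile's client certificate, client key, and the minikube CA. A minimal client-go sketch that builds an equivalent client from the same paths (assuming client-go is available; error handling trimmed), making the same kind of StorageClass list call that later times out:

```go
// Minimal client-go equivalent of the rest.Config dumped above; illustrative only.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	base := "/Users/jenkins/minikube-integration/19740-1141/.minikube"
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: base + "/profiles/running-upgrade-193000/client.crt",
			KeyFile:  base + "/profiles/running-upgrade-193000/client.key",
			CAFile:   base + "/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The same kind of call the default-storageclass addon makes when it later
	// fails with "dial tcp 10.0.2.15:8443: i/o timeout".
	scs, err := clientset.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("list storageclasses:", err)
		return
	}
	fmt.Println("storage classes:", len(scs.Items))
}
```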
	I1001 16:48:00.428048    4804 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-193000"
	W1001 16:48:00.428054    4804 addons.go:243] addon default-storageclass should already be in state true
	I1001 16:48:00.428060    4804 host.go:66] Checking if "running-upgrade-193000" exists ...
	I1001 16:48:00.431217    4804 out.go:177] * Verifying Kubernetes components...
	I1001 16:48:00.431653    4804 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 16:48:00.435452    4804 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 16:48:00.435460    4804 sshutil.go:53] new ssh client: &{IP:localhost Port:50233 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/running-upgrade-193000/id_rsa Username:docker}
	I1001 16:48:00.439246    4804 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:48:00.442324    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:48:00.449348    4804 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 16:48:00.449355    4804 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 16:48:00.449362    4804 sshutil.go:53] new ssh client: &{IP:localhost Port:50233 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/running-upgrade-193000/id_rsa Username:docker}
	I1001 16:48:00.538799    4804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 16:48:00.543802    4804 api_server.go:52] waiting for apiserver process to appear ...
	I1001 16:48:00.543854    4804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 16:48:00.548305    4804 api_server.go:72] duration metric: took 121.516542ms to wait for apiserver process to appear ...
	I1001 16:48:00.548312    4804 api_server.go:88] waiting for apiserver healthz status ...
	I1001 16:48:00.548320    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:00.555061    4804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 16:48:00.636940    4804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 16:48:00.901539    4804 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 16:48:00.901555    4804 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 16:48:05.550355    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:05.550394    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:10.550714    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:10.550758    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:15.551171    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:15.551212    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:20.551692    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:20.551749    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:25.552476    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:25.552516    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:30.553316    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:30.553350    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1001 16:48:30.903924    4804 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1001 16:48:30.907524    4804 out.go:177] * Enabled addons: storage-provisioner
	I1001 16:48:30.919315    4804 addons.go:510] duration metric: took 30.49280275s for enable addons: enabled=[storage-provisioner]
	I1001 16:48:35.554414    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:35.554452    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:40.555771    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:40.555813    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:45.557774    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:45.557814    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:50.560015    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:50.560032    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:55.562177    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:55.562224    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:00.563164    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:00.563347    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:49:00.581874    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:49:00.581953    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:49:00.592169    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:49:00.592253    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:49:00.609812    4804 logs.go:282] 2 containers: [4e2b1026af64 52703530d033]
	I1001 16:49:00.609894    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:49:00.623840    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:49:00.623916    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:49:00.639064    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:49:00.639148    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:49:00.649648    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:49:00.649718    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:49:00.659541    4804 logs.go:282] 0 containers: []
	W1001 16:49:00.659552    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:49:00.659622    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:49:00.669815    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:49:00.669829    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:49:00.669834    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:49:00.706101    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:49:00.706116    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:49:00.719834    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:49:00.719847    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:49:00.731612    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:49:00.731625    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:49:00.742925    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:49:00.742937    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:49:00.764877    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:49:00.764886    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:49:00.775632    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:49:00.775643    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:49:00.787059    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:49:00.787069    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:49:00.804538    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:49:00.804630    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:49:00.820810    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:49:00.820816    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:49:00.825398    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:49:00.825408    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:49:00.839010    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:49:00.839019    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:49:00.850317    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:49:00.850328    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:49:00.868319    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:49:00.868328    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:49:00.893462    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:49:00.893475    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:49:00.893500    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:49:00.893505    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:49:00.893508    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:49:00.893511    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:49:00.893514    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:49:10.897189    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:15.899926    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:15.900375    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:49:15.932052    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:49:15.932214    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:49:15.951142    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:49:15.951242    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:49:15.965370    4804 logs.go:282] 2 containers: [4e2b1026af64 52703530d033]
	I1001 16:49:15.965462    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:49:15.981719    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:49:15.981801    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:49:15.992821    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:49:15.992910    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:49:16.003676    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:49:16.003761    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:49:16.014404    4804 logs.go:282] 0 containers: []
	W1001 16:49:16.014416    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:49:16.014489    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:49:16.025040    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:49:16.025054    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:49:16.025060    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:49:16.041751    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:49:16.041843    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:49:16.058436    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:49:16.058441    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:49:16.062993    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:49:16.063002    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:49:16.104754    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:49:16.104765    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:49:16.116737    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:49:16.116747    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:49:16.137189    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:49:16.137199    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:49:16.152347    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:49:16.152356    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:49:16.166147    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:49:16.166155    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:49:16.177741    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:49:16.177752    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:49:16.188951    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:49:16.188961    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:49:16.206144    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:49:16.206154    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:49:16.217739    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:49:16.217755    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:49:16.241366    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:49:16.241376    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:49:16.256181    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:49:16.256190    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:49:16.256215    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:49:16.256220    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:49:16.256223    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:49:16.256227    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:49:16.256231    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:49:26.260270    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:31.262593    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:31.262835    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:49:31.297698    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:49:31.297843    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:49:31.314223    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:49:31.314317    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:49:31.327049    4804 logs.go:282] 2 containers: [4e2b1026af64 52703530d033]
	I1001 16:49:31.327139    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:49:31.339423    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:49:31.339503    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:49:31.350592    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:49:31.350668    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:49:31.361444    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:49:31.361531    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:49:31.372168    4804 logs.go:282] 0 containers: []
	W1001 16:49:31.372179    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:49:31.372247    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:49:31.382439    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:49:31.382453    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:49:31.382459    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:49:31.387431    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:49:31.387438    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:49:31.402494    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:49:31.402505    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:49:31.416714    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:49:31.416725    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:49:31.436011    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:49:31.436025    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:49:31.448595    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:49:31.448606    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:49:31.473190    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:49:31.473198    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:49:31.484415    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:49:31.484426    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:49:31.502825    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:49:31.502921    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:49:31.519547    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:49:31.519553    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:49:31.591812    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:49:31.591827    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:49:31.607064    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:49:31.607074    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:49:31.621233    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:49:31.621248    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:49:31.632520    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:49:31.632535    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:49:31.644066    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:49:31.644080    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:49:31.644106    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:49:31.644111    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:49:31.644116    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:49:31.644120    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:49:31.644123    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:49:41.648186    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:46.650468    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:46.650580    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:49:46.662018    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:49:46.662098    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:49:46.674159    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:49:46.674243    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:49:46.689542    4804 logs.go:282] 2 containers: [4e2b1026af64 52703530d033]
	I1001 16:49:46.689630    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:49:46.701563    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:49:46.701646    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:49:46.713193    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:49:46.713280    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:49:46.724719    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:49:46.724802    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:49:46.735750    4804 logs.go:282] 0 containers: []
	W1001 16:49:46.735765    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:49:46.735846    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:49:46.747964    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:49:46.747985    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:49:46.747992    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:49:46.764535    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:49:46.764553    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:49:46.776948    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:49:46.776960    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:49:46.789159    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:49:46.789170    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:49:46.815217    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:49:46.815232    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:49:46.827759    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:49:46.827772    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:49:46.847039    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:49:46.847135    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:49:46.864249    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:49:46.864257    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:49:46.904147    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:49:46.904159    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:49:46.919851    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:49:46.919866    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:49:46.936176    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:49:46.936191    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:49:46.948706    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:49:46.948718    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:49:46.966831    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:49:46.966848    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:49:46.979342    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:49:46.979354    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:49:46.985082    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:49:46.985093    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:49:46.985119    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:49:46.985125    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:49:46.985129    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:49:46.985133    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:49:46.985135    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:49:56.989163    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:01.991423    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:01.991541    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:50:02.002981    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:50:02.003067    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:50:02.013114    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:50:02.013196    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:50:02.023560    4804 logs.go:282] 2 containers: [4e2b1026af64 52703530d033]
	I1001 16:50:02.023640    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:50:02.033826    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:50:02.033908    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:50:02.049816    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:50:02.049905    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:50:02.060113    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:50:02.060195    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:50:02.070834    4804 logs.go:282] 0 containers: []
	W1001 16:50:02.070845    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:50:02.070921    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:50:02.082622    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:50:02.082636    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:50:02.082642    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:50:02.096777    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:50:02.096787    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:50:02.113383    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:50:02.113394    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:50:02.125335    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:50:02.125346    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:50:02.160457    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:50:02.160467    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:50:02.165065    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:50:02.165074    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:50:02.179531    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:50:02.179542    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:50:02.195154    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:50:02.195163    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:50:02.207129    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:50:02.207138    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:50:02.225160    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:50:02.225172    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:50:02.237203    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:50:02.237213    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:50:02.260461    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:50:02.260470    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:50:02.277697    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:50:02.277792    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:50:02.294428    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:50:02.294435    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:50:02.306921    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:50:02.306931    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:50:02.306958    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:50:02.306963    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:50:02.306966    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:50:02.306969    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:50:02.306972    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:50:12.311007    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:17.313299    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:17.313811    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:50:17.351818    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:50:17.351986    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:50:17.372932    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:50:17.373058    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:50:17.388587    4804 logs.go:282] 4 containers: [2a9fdf492bbf 50b4f2e786a4 4e2b1026af64 52703530d033]
	I1001 16:50:17.388686    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:50:17.400357    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:50:17.400436    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:50:17.411276    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:50:17.411350    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:50:17.426127    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:50:17.426225    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:50:17.436586    4804 logs.go:282] 0 containers: []
	W1001 16:50:17.436598    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:50:17.436673    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:50:17.447829    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:50:17.447846    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:50:17.447852    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:50:17.463361    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:50:17.463375    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:50:17.475895    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:50:17.475905    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:50:17.487639    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:50:17.487651    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:50:17.504124    4804 logs.go:123] Gathering logs for coredns [50b4f2e786a4] ...
	I1001 16:50:17.504146    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50b4f2e786a4"
	I1001 16:50:17.517438    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:50:17.517450    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:50:17.530595    4804 logs.go:123] Gathering logs for coredns [2a9fdf492bbf] ...
	I1001 16:50:17.530606    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a9fdf492bbf"
	I1001 16:50:17.541964    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:50:17.541976    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:50:17.547220    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:50:17.547228    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:50:17.583152    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:50:17.583163    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:50:17.598286    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:50:17.598297    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:50:17.615102    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:50:17.615198    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:50:17.631900    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:50:17.631906    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:50:17.649295    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:50:17.649304    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:50:17.660792    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:50:17.660805    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:50:17.672697    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:50:17.672711    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:50:17.697425    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:50:17.697432    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:50:17.697456    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:50:17.697460    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:50:17.697463    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:50:17.697481    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:50:17.697484    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:50:27.701480    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:32.703812    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:32.704338    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:50:32.738950    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:50:32.739112    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:50:32.764008    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:50:32.764115    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:50:32.777895    4804 logs.go:282] 4 containers: [2a9fdf492bbf 50b4f2e786a4 4e2b1026af64 52703530d033]
	I1001 16:50:32.777996    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:50:32.790414    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:50:32.790502    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:50:32.801841    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:50:32.801925    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:50:32.813046    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:50:32.813132    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:50:32.823248    4804 logs.go:282] 0 containers: []
	W1001 16:50:32.823266    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:50:32.823335    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:50:32.834156    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:50:32.834175    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:50:32.834180    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:50:32.853166    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:50:32.853257    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:50:32.869420    4804 logs.go:123] Gathering logs for coredns [2a9fdf492bbf] ...
	I1001 16:50:32.869429    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a9fdf492bbf"
	I1001 16:50:32.880840    4804 logs.go:123] Gathering logs for coredns [50b4f2e786a4] ...
	I1001 16:50:32.880849    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50b4f2e786a4"
	I1001 16:50:32.892618    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:50:32.892628    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:50:32.904211    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:50:32.904220    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:50:32.918582    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:50:32.918591    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:50:32.930277    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:50:32.930287    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:50:32.947687    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:50:32.947697    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:50:32.959756    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:50:32.959769    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:50:32.974376    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:50:32.974386    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:50:32.985849    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:50:32.985860    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:50:33.001160    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:50:33.001169    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:50:33.005824    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:50:33.005831    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:50:33.042267    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:50:33.042281    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:50:33.068362    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:50:33.068374    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:50:33.093781    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:50:33.093790    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:50:33.093814    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:50:33.093818    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:50:33.093831    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:50:33.093835    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:50:33.093840    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:50:43.097320    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:48.099629    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:48.099860    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:50:48.114717    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:50:48.114817    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:50:48.126197    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:50:48.126285    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:50:48.137811    4804 logs.go:282] 4 containers: [2a9fdf492bbf 50b4f2e786a4 4e2b1026af64 52703530d033]
	I1001 16:50:48.137899    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:50:48.153284    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:50:48.153367    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:50:48.164088    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:50:48.164168    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:50:48.174621    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:50:48.174692    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:50:48.185704    4804 logs.go:282] 0 containers: []
	W1001 16:50:48.185717    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:50:48.185787    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:50:48.196180    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:50:48.196195    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:50:48.196200    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:50:48.207629    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:50:48.207639    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:50:48.219540    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:50:48.219550    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:50:48.232462    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:50:48.232474    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:50:48.250059    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:50:48.250069    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:50:48.264975    4804 logs.go:123] Gathering logs for coredns [2a9fdf492bbf] ...
	I1001 16:50:48.264986    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a9fdf492bbf"
	I1001 16:50:48.295856    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:50:48.295865    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:50:48.322950    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:50:48.322961    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:50:48.327636    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:50:48.327645    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:50:48.342392    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:50:48.342402    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:50:48.356328    4804 logs.go:123] Gathering logs for coredns [50b4f2e786a4] ...
	I1001 16:50:48.356338    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50b4f2e786a4"
	I1001 16:50:48.367665    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:50:48.367675    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:50:48.380558    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:50:48.380571    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:50:48.391870    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:50:48.391881    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:50:48.408668    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:50:48.408759    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:50:48.425658    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:50:48.425673    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:50:48.461883    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:50:48.461893    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:50:48.461923    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:50:48.461928    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:50:48.461934    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:50:48.461938    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:50:48.461941    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:50:58.465967    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:03.468181    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:03.468310    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:03.483154    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:51:03.483246    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:03.494880    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:51:03.494967    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:03.505505    4804 logs.go:282] 4 containers: [2a9fdf492bbf 50b4f2e786a4 4e2b1026af64 52703530d033]
	I1001 16:51:03.505592    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:03.515929    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:51:03.516012    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:03.527668    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:51:03.527739    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:03.538705    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:51:03.538779    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:03.549311    4804 logs.go:282] 0 containers: []
	W1001 16:51:03.549323    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:03.549388    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:03.559793    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:51:03.559809    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:51:03.559814    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:51:03.574308    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:51:03.574319    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:51:03.585814    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:51:03.585824    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:51:03.606365    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:51:03.606374    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:03.618468    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:03.618479    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:51:03.637633    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:51:03.637725    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:51:03.654071    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:51:03.654079    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:51:03.665651    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:51:03.665661    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:51:03.683093    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:51:03.683104    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:51:03.694395    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:03.694413    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:03.729771    4804 logs.go:123] Gathering logs for coredns [2a9fdf492bbf] ...
	I1001 16:51:03.729782    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a9fdf492bbf"
	I1001 16:51:03.745381    4804 logs.go:123] Gathering logs for coredns [50b4f2e786a4] ...
	I1001 16:51:03.745391    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50b4f2e786a4"
	I1001 16:51:03.757214    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:03.757229    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:03.762072    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:51:03.762079    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:51:03.776549    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:51:03.776559    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:51:03.788620    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:03.788633    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:03.813295    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:51:03.813303    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:51:03.813332    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:51:03.813337    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:51:03.813355    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:51:03.813362    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:51:03.813366    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:51:13.817220    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:18.819446    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:18.819616    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:18.834491    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:51:18.834590    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:18.847948    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:51:18.848035    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:18.858854    4804 logs.go:282] 4 containers: [2a9fdf492bbf 50b4f2e786a4 4e2b1026af64 52703530d033]
	I1001 16:51:18.858945    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:18.870061    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:51:18.870141    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:18.881100    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:51:18.881184    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:18.891788    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:51:18.891872    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:18.902456    4804 logs.go:282] 0 containers: []
	W1001 16:51:18.902469    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:18.902535    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:18.913334    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:51:18.913350    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:51:18.913356    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:51:18.928814    4804 logs.go:123] Gathering logs for coredns [2a9fdf492bbf] ...
	I1001 16:51:18.928826    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a9fdf492bbf"
	I1001 16:51:18.940940    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:51:18.940953    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:51:18.953456    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:51:18.953470    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:51:18.975234    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:18.975245    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:51:18.994350    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:51:18.994441    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:51:19.010328    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:51:19.010333    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:51:19.024024    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:51:19.024033    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:51:19.036557    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:51:19.036569    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:51:19.048367    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:51:19.048377    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:51:19.059959    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:19.059971    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:19.095541    4804 logs.go:123] Gathering logs for coredns [50b4f2e786a4] ...
	I1001 16:51:19.095554    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50b4f2e786a4"
	I1001 16:51:19.107223    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:19.107233    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:19.132064    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:51:19.132075    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:19.143977    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:19.143987    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:19.148814    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:51:19.148822    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:51:19.163387    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:51:19.163400    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:51:19.163424    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:51:19.163428    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:51:19.163431    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:51:19.163435    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:51:19.163438    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:51:29.167449    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:34.169737    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:34.169984    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:34.189172    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:51:34.189286    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:34.203313    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:51:34.203404    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:34.215560    4804 logs.go:282] 4 containers: [2a9fdf492bbf 50b4f2e786a4 4e2b1026af64 52703530d033]
	I1001 16:51:34.215647    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:34.228501    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:51:34.228590    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:34.239534    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:51:34.239616    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:34.250303    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:51:34.250383    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:34.260260    4804 logs.go:282] 0 containers: []
	W1001 16:51:34.260273    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:34.260337    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:34.272333    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:51:34.272353    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:51:34.272358    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:51:34.286632    4804 logs.go:123] Gathering logs for coredns [2a9fdf492bbf] ...
	I1001 16:51:34.286642    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a9fdf492bbf"
	I1001 16:51:34.298086    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:51:34.298098    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:51:34.309384    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:51:34.309397    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:51:34.327055    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:51:34.327069    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:34.339242    4804 logs.go:123] Gathering logs for coredns [50b4f2e786a4] ...
	I1001 16:51:34.339255    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50b4f2e786a4"
	I1001 16:51:34.351018    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:34.351031    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:34.355745    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:34.355754    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:34.391357    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:51:34.391368    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:51:34.408852    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:51:34.408866    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:51:34.423374    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:51:34.423385    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:51:34.437625    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:34.437634    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:34.462455    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:34.462461    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:51:34.480026    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:51:34.480116    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:51:34.496190    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:51:34.496194    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:51:34.510668    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:51:34.510680    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:51:34.525276    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:51:34.525286    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:51:34.525314    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:51:34.525319    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:51:34.525332    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:51:34.525337    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:51:34.525346    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:51:44.528869    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:49.531145    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:49.531348    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:49.548595    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:51:49.548707    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:49.561532    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:51:49.561615    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:49.577658    4804 logs.go:282] 4 containers: [2a9fdf492bbf 50b4f2e786a4 4e2b1026af64 52703530d033]
	I1001 16:51:49.577735    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:49.587822    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:51:49.587891    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:49.598563    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:51:49.598636    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:49.609757    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:51:49.609840    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:49.625445    4804 logs.go:282] 0 containers: []
	W1001 16:51:49.625458    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:49.625529    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:49.636550    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:51:49.636567    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:49.636574    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:49.641520    4804 logs.go:123] Gathering logs for coredns [2a9fdf492bbf] ...
	I1001 16:51:49.641530    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a9fdf492bbf"
	I1001 16:51:49.653379    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:51:49.653390    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:51:49.666391    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:49.666401    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:49.725774    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:51:49.725784    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:51:49.740710    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:51:49.740720    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:49.753032    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:49.753042    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:51:49.769547    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:51:49.769638    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:51:49.785870    4804 logs.go:123] Gathering logs for coredns [50b4f2e786a4] ...
	I1001 16:51:49.785874    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50b4f2e786a4"
	I1001 16:51:49.797224    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:51:49.797236    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:51:49.809190    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:51:49.809201    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:51:49.826573    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:51:49.826585    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:51:49.840652    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:51:49.840664    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:51:49.855311    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:51:49.855320    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:51:49.867354    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:51:49.867364    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:51:49.878716    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:49.878726    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:49.902868    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:51:49.902878    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:51:49.902901    4804 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 16:51:49.902904    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:51:49.902908    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	  Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:51:49.902912    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:51:49.902915    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:51:59.902804    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:52:04.900110    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:52:04.905665    4804 out.go:201] 
	W1001 16:52:04.908636    4804 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1001 16:52:04.908649    4804 out.go:270] * 
	* 
	W1001 16:52:04.909662    4804 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:52:04.918561    4804 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-193000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
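For reference, a minimal sketch of the sequence this test exercises, reconstructed from the failing command above and the Audit table in the post-mortem logs below; the local path of the v1.26.0 binary is an assumption (the test harness downloads and caches the old release itself):

    # start a cluster with the previous minikube release (binary path assumed, not taken from this log)
    ./minikube-v1.26.0 start -p running-upgrade-193000 --memory=2200 --vm-driver=qemu2
    # re-run start on the same, still-running profile with the binary under test;
    # this is the invocation that exited with status 80 (GUEST_START) above
    out/minikube-darwin-arm64 start -p running-upgrade-193000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2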
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-10-01 16:52:04.999461 -0700 PDT m=+3917.905666334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-193000 -n running-upgrade-193000
E1001 16:52:14.581253    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-193000 -n running-upgrade-193000: exit status 2 (15.661964208s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-193000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | force-systemd-env-845000              | force-systemd-env-845000  | jenkins | v1.34.0 | 01 Oct 24 16:40 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-845000           | force-systemd-env-845000  | jenkins | v1.34.0 | 01 Oct 24 16:40 PDT | 01 Oct 24 16:40 PDT |
	| start   | -p docker-flags-434000                | docker-flags-434000       | jenkins | v1.34.0 | 01 Oct 24 16:40 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-173000             | force-systemd-flag-173000 | jenkins | v1.34.0 | 01 Oct 24 16:40 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-173000          | force-systemd-flag-173000 | jenkins | v1.34.0 | 01 Oct 24 16:40 PDT | 01 Oct 24 16:40 PDT |
	| start   | -p cert-expiration-161000             | cert-expiration-161000    | jenkins | v1.34.0 | 01 Oct 24 16:40 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-434000 ssh               | docker-flags-434000       | jenkins | v1.34.0 | 01 Oct 24 16:40 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-434000 ssh               | docker-flags-434000       | jenkins | v1.34.0 | 01 Oct 24 16:40 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-434000                | docker-flags-434000       | jenkins | v1.34.0 | 01 Oct 24 16:40 PDT | 01 Oct 24 16:40 PDT |
	| start   | -p cert-options-774000                | cert-options-774000       | jenkins | v1.34.0 | 01 Oct 24 16:40 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-774000 ssh               | cert-options-774000       | jenkins | v1.34.0 | 01 Oct 24 16:40 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-774000 -- sudo        | cert-options-774000       | jenkins | v1.34.0 | 01 Oct 24 16:40 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-774000                | cert-options-774000       | jenkins | v1.34.0 | 01 Oct 24 16:40 PDT | 01 Oct 24 16:40 PDT |
	| start   | -p running-upgrade-193000             | minikube                  | jenkins | v1.26.0 | 01 Oct 24 16:40 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-193000             | minikube                  | jenkins | v1.26.0 | 01 Oct 24 16:42 PDT | 01 Oct 24 16:43 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-193000             | running-upgrade-193000    | jenkins | v1.34.0 | 01 Oct 24 16:43 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-161000             | cert-expiration-161000    | jenkins | v1.34.0 | 01 Oct 24 16:43 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-161000             | cert-expiration-161000    | jenkins | v1.34.0 | 01 Oct 24 16:43 PDT | 01 Oct 24 16:43 PDT |
	| start   | -p kubernetes-upgrade-407000          | kubernetes-upgrade-407000 | jenkins | v1.34.0 | 01 Oct 24 16:43 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-407000          | kubernetes-upgrade-407000 | jenkins | v1.34.0 | 01 Oct 24 16:44 PDT | 01 Oct 24 16:44 PDT |
	| start   | -p kubernetes-upgrade-407000          | kubernetes-upgrade-407000 | jenkins | v1.34.0 | 01 Oct 24 16:44 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-407000          | kubernetes-upgrade-407000 | jenkins | v1.34.0 | 01 Oct 24 16:44 PDT | 01 Oct 24 16:44 PDT |
	| start   | -p stopped-upgrade-342000             | minikube                  | jenkins | v1.26.0 | 01 Oct 24 16:44 PDT | 01 Oct 24 16:44 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-342000 stop           | minikube                  | jenkins | v1.26.0 | 01 Oct 24 16:44 PDT | 01 Oct 24 16:45 PDT |
	| start   | -p stopped-upgrade-342000             | stopped-upgrade-342000    | jenkins | v1.34.0 | 01 Oct 24 16:45 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 16:45:11
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 16:45:11.870838    4927 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:45:11.871035    4927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:45:11.871039    4927 out.go:358] Setting ErrFile to fd 2...
	I1001 16:45:11.871046    4927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:45:11.871197    4927 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:45:11.872422    4927 out.go:352] Setting JSON to false
	I1001 16:45:11.892221    4927 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4479,"bootTime":1727821832,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:45:11.892290    4927 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:45:11.897388    4927 out.go:177] * [stopped-upgrade-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:45:11.904309    4927 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:45:11.904358    4927 notify.go:220] Checking for updates...
	I1001 16:45:11.912362    4927 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:45:11.916354    4927 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:45:11.919380    4927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:45:11.922348    4927 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:45:11.925331    4927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:45:11.928584    4927 config.go:182] Loaded profile config "stopped-upgrade-342000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 16:45:11.931382    4927 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1001 16:45:11.934338    4927 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:45:11.938323    4927 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 16:45:11.945251    4927 start.go:297] selected driver: qemu2
	I1001 16:45:11.945256    4927 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50522 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1001 16:45:11.945307    4927 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:45:11.947654    4927 cni.go:84] Creating CNI manager for ""
	I1001 16:45:11.947692    4927 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:45:11.947724    4927 start.go:340] cluster config:
	{Name:stopped-upgrade-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50522 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1001 16:45:11.947785    4927 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:45:11.956186    4927 out.go:177] * Starting "stopped-upgrade-342000" primary control-plane node in "stopped-upgrade-342000" cluster
	I1001 16:45:11.960296    4927 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1001 16:45:11.960309    4927 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1001 16:45:11.960314    4927 cache.go:56] Caching tarball of preloaded images
	I1001 16:45:11.960360    4927 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:45:11.960365    4927 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1001 16:45:11.960408    4927 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/config.json ...
	I1001 16:45:11.960813    4927 start.go:360] acquireMachinesLock for stopped-upgrade-342000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:45:11.960844    4927 start.go:364] duration metric: took 25.334µs to acquireMachinesLock for "stopped-upgrade-342000"
	I1001 16:45:11.960851    4927 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:45:11.960856    4927 fix.go:54] fixHost starting: 
	I1001 16:45:11.960976    4927 fix.go:112] recreateIfNeeded on stopped-upgrade-342000: state=Stopped err=<nil>
	W1001 16:45:11.960984    4927 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:45:11.968328    4927 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-342000" ...
	I1001 16:45:11.972340    4927 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:45:11.972400    4927 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50486-:22,hostfwd=tcp::50487-:2376,hostname=stopped-upgrade-342000 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/disk.qcow2
	I1001 16:45:12.017358    4927 main.go:141] libmachine: STDOUT: 
	I1001 16:45:12.017392    4927 main.go:141] libmachine: STDERR: 
	I1001 16:45:12.017399    4927 main.go:141] libmachine: Waiting for VM to start (ssh -p 50486 docker@127.0.0.1)...
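
The "Waiting for VM to start" step above blocks until the guest's host-forwarded SSH port (50486 in the qemu invocation logged just before it) starts accepting connections. Below is a minimal sketch of that wait, assuming a plain TCP dial is a sufficient readiness signal; minikube's real waiter completes a full SSH handshake as the docker user. Note also that the pid 4804 lines that follow belong to the concurrently running running-upgrade-193000 test, while pid 4927 lines belong to stopped-upgrade-342000.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH polls the forwarded SSH port until it accepts a TCP
    // connection or the overall deadline expires.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
        // 50486 is the hostfwd port from the qemu command above.
        if err := waitForSSH("127.0.0.1:50486", 3*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
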
	I1001 16:45:13.139144    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:45:18.141711    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:45:18.141869    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:45:18.155990    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:45:18.156089    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:45:18.171708    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:45:18.171802    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:45:18.184138    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:45:18.184216    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:45:18.195315    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:45:18.195393    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:45:18.205733    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:45:18.205810    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:45:18.216578    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:45:18.216648    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:45:18.227387    4804 logs.go:282] 0 containers: []
	W1001 16:45:18.227399    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:45:18.227475    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:45:18.238371    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:45:18.238387    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:45:18.238393    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:45:18.253222    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:45:18.253237    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:45:18.271482    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:45:18.271492    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:45:18.290546    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:45:18.290555    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:45:18.309481    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:45:18.309491    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:45:18.346388    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:45:18.346400    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:45:18.368242    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:45:18.368252    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:45:18.379822    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:45:18.379834    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:45:18.404928    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:45:18.404935    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:45:18.416946    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:45:18.416957    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:45:18.457105    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:45:18.457197    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:45:18.457487    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:45:18.457497    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:45:18.471887    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:45:18.471898    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:45:18.490005    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:45:18.490014    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:45:18.506578    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:45:18.506589    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:45:18.511011    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:45:18.511016    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:45:18.522865    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:45:18.522876    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:45:18.534505    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:45:18.534515    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:45:18.545446    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:45:18.545459    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:45:18.545493    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:45:18.545497    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:45:18.545500    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:45:18.545504    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:45:18.545508    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
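
The api_server.go lines in this block poll https://10.0.2.15:8443/healthz with a short per-request timeout; when the request fails with "Client.Timeout exceeded", the loop falls back to collecting container, kubelet, dmesg, and Docker logs before retrying. A hedged sketch of a single health probe follows, with the 5-second timeout inferred from the gap between the check and the "stopped" line above, and TLS verification disabled only to keep the example self-contained (minikube authenticates with the cluster's certificates instead).

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz issues one GET against the apiserver healthz endpoint and
    // reports any transport error or non-OK response.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout:   5 * time.Second, // matches the ~5s between "Checking" and "stopped" above
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "context deadline exceeded (Client.Timeout exceeded ...)"
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
        }
        return nil
    }

    func main() {
        fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
    }
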
	I1001 16:45:31.678531    4927 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/config.json ...
	I1001 16:45:31.679231    4927 machine.go:93] provisionDockerMachine start ...
	I1001 16:45:31.679433    4927 main.go:141] libmachine: Using SSH client type: native
	I1001 16:45:31.679890    4927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f65c00] 0x102f68440 <nil>  [] 0s} localhost 50486 <nil> <nil>}
	I1001 16:45:31.679904    4927 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 16:45:31.757789    4927 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1001 16:45:31.757829    4927 buildroot.go:166] provisioning hostname "stopped-upgrade-342000"
	I1001 16:45:31.757997    4927 main.go:141] libmachine: Using SSH client type: native
	I1001 16:45:31.758265    4927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f65c00] 0x102f68440 <nil>  [] 0s} localhost 50486 <nil> <nil>}
	I1001 16:45:31.758281    4927 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-342000 && echo "stopped-upgrade-342000" | sudo tee /etc/hostname
	I1001 16:45:31.826489    4927 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-342000
	
	I1001 16:45:31.826576    4927 main.go:141] libmachine: Using SSH client type: native
	I1001 16:45:31.826770    4927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f65c00] 0x102f68440 <nil>  [] 0s} localhost 50486 <nil> <nil>}
	I1001 16:45:31.826783    4927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-342000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-342000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-342000' | sudo tee -a /etc/hosts; 
				fi
			fi
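
Each of these provisioning steps is a shell command executed on the guest over the forwarded SSH port, matching the "new ssh client" / "Run:" pairs throughout this log. A minimal sketch of running one such command with golang.org/x/crypto/ssh is shown below, assuming the machine's id_rsa key and port 50486 from above; it simplifies away the retries and structured output handling that minikube's ssh_runner performs.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH executes a single shell command on the guest and returns its
    // combined stdout/stderr.
    func runOverSSH(addr, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a localhost-forwarded test VM
        })
        if err != nil {
            return "", err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        out, err := session.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runOverSSH("127.0.0.1:50486",
            "/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/id_rsa",
            "hostname")
        fmt.Println(out, err)
    }
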
	I1001 16:45:28.549623    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:45:31.886892    4927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 16:45:31.886904    4927 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19740-1141/.minikube CaCertPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19740-1141/.minikube}
	I1001 16:45:31.886920    4927 buildroot.go:174] setting up certificates
	I1001 16:45:31.886925    4927 provision.go:84] configureAuth start
	I1001 16:45:31.886932    4927 provision.go:143] copyHostCerts
	I1001 16:45:31.887023    4927 exec_runner.go:144] found /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.pem, removing ...
	I1001 16:45:31.887034    4927 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.pem
	I1001 16:45:31.887296    4927 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.pem (1078 bytes)
	I1001 16:45:31.887478    4927 exec_runner.go:144] found /Users/jenkins/minikube-integration/19740-1141/.minikube/cert.pem, removing ...
	I1001 16:45:31.887482    4927 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19740-1141/.minikube/cert.pem
	I1001 16:45:31.887539    4927 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19740-1141/.minikube/cert.pem (1123 bytes)
	I1001 16:45:31.887657    4927 exec_runner.go:144] found /Users/jenkins/minikube-integration/19740-1141/.minikube/key.pem, removing ...
	I1001 16:45:31.887660    4927 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19740-1141/.minikube/key.pem
	I1001 16:45:31.887717    4927 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19740-1141/.minikube/key.pem (1679 bytes)
	I1001 16:45:31.887810    4927 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-342000 san=[127.0.0.1 localhost minikube stopped-upgrade-342000]
	I1001 16:45:31.969219    4927 provision.go:177] copyRemoteCerts
	I1001 16:45:31.969254    4927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 16:45:31.969262    4927 sshutil.go:53] new ssh client: &{IP:localhost Port:50486 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/id_rsa Username:docker}
	I1001 16:45:31.995080    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1001 16:45:32.002195    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 16:45:32.008860    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 16:45:32.015536    4927 provision.go:87] duration metric: took 128.605541ms to configureAuth
	I1001 16:45:32.015546    4927 buildroot.go:189] setting minikube options for container-runtime
	I1001 16:45:32.015651    4927 config.go:182] Loaded profile config "stopped-upgrade-342000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 16:45:32.015693    4927 main.go:141] libmachine: Using SSH client type: native
	I1001 16:45:32.015775    4927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f65c00] 0x102f68440 <nil>  [] 0s} localhost 50486 <nil> <nil>}
	I1001 16:45:32.015780    4927 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1001 16:45:32.065819    4927 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1001 16:45:32.065829    4927 buildroot.go:70] root file system type: tmpfs
	I1001 16:45:32.065882    4927 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1001 16:45:32.065935    4927 main.go:141] libmachine: Using SSH client type: native
	I1001 16:45:32.066047    4927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f65c00] 0x102f68440 <nil>  [] 0s} localhost 50486 <nil> <nil>}
	I1001 16:45:32.066080    4927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1001 16:45:32.120146    4927 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1001 16:45:32.120205    4927 main.go:141] libmachine: Using SSH client type: native
	I1001 16:45:32.120312    4927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f65c00] 0x102f68440 <nil>  [] 0s} localhost 50486 <nil> <nil>}
	I1001 16:45:32.120324    4927 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1001 16:45:32.467727    4927 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1001 16:45:32.467744    4927 machine.go:96] duration metric: took 788.510375ms to provisionDockerMachine
	I1001 16:45:32.467752    4927 start.go:293] postStartSetup for "stopped-upgrade-342000" (driver="qemu2")
	I1001 16:45:32.467758    4927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 16:45:32.467850    4927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 16:45:32.467864    4927 sshutil.go:53] new ssh client: &{IP:localhost Port:50486 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/id_rsa Username:docker}
	I1001 16:45:32.495725    4927 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 16:45:32.497046    4927 info.go:137] Remote host: Buildroot 2021.02.12
	I1001 16:45:32.497055    4927 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19740-1141/.minikube/addons for local assets ...
	I1001 16:45:32.497144    4927 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19740-1141/.minikube/files for local assets ...
	I1001 16:45:32.497273    4927 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19740-1141/.minikube/files/etc/ssl/certs/16592.pem -> 16592.pem in /etc/ssl/certs
	I1001 16:45:32.497407    4927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 16:45:32.499984    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/files/etc/ssl/certs/16592.pem --> /etc/ssl/certs/16592.pem (1708 bytes)
	I1001 16:45:32.507103    4927 start.go:296] duration metric: took 39.345709ms for postStartSetup
	I1001 16:45:32.507117    4927 fix.go:56] duration metric: took 20.546473334s for fixHost
	I1001 16:45:32.507153    4927 main.go:141] libmachine: Using SSH client type: native
	I1001 16:45:32.507260    4927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f65c00] 0x102f68440 <nil>  [] 0s} localhost 50486 <nil> <nil>}
	I1001 16:45:32.507266    4927 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 16:45:32.556401    4927 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727826332.732110171
	
	I1001 16:45:32.556410    4927 fix.go:216] guest clock: 1727826332.732110171
	I1001 16:45:32.556414    4927 fix.go:229] Guest: 2024-10-01 16:45:32.732110171 -0700 PDT Remote: 2024-10-01 16:45:32.507119 -0700 PDT m=+20.667052084 (delta=224.991171ms)
	I1001 16:45:32.556434    4927 fix.go:200] guest clock delta is within tolerance: 224.991171ms
	I1001 16:45:32.556437    4927 start.go:83] releasing machines lock for "stopped-upgrade-342000", held for 20.595801042s
	I1001 16:45:32.556500    4927 ssh_runner.go:195] Run: cat /version.json
	I1001 16:45:32.556509    4927 sshutil.go:53] new ssh client: &{IP:localhost Port:50486 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/id_rsa Username:docker}
	I1001 16:45:32.556500    4927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 16:45:32.556540    4927 sshutil.go:53] new ssh client: &{IP:localhost Port:50486 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/id_rsa Username:docker}
	W1001 16:45:32.557028    4927 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50486: connect: connection refused
	I1001 16:45:32.557047    4927 retry.go:31] will retry after 265.718747ms: dial tcp [::1]:50486: connect: connection refused
	W1001 16:45:32.584925    4927 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1001 16:45:32.584966    4927 ssh_runner.go:195] Run: systemctl --version
	I1001 16:45:32.586741    4927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 16:45:32.588499    4927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 16:45:32.588529    4927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1001 16:45:32.591695    4927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1001 16:45:32.596437    4927 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 16:45:32.596444    4927 start.go:495] detecting cgroup driver to use...
	I1001 16:45:32.596530    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 16:45:32.602874    4927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1001 16:45:32.605925    4927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1001 16:45:32.608693    4927 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1001 16:45:32.608730    4927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1001 16:45:32.611990    4927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 16:45:32.615419    4927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1001 16:45:32.618543    4927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 16:45:32.621490    4927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 16:45:32.624226    4927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1001 16:45:32.627433    4927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1001 16:45:32.630670    4927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1001 16:45:32.633428    4927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 16:45:32.636157    4927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 16:45:32.639391    4927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:45:32.718158    4927 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1001 16:45:32.728559    4927 start.go:495] detecting cgroup driver to use...
	I1001 16:45:32.728658    4927 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1001 16:45:32.738429    4927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 16:45:32.742796    4927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 16:45:32.754939    4927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 16:45:32.760893    4927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1001 16:45:32.766798    4927 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1001 16:45:32.827092    4927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1001 16:45:32.835261    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 16:45:32.841370    4927 ssh_runner.go:195] Run: which cri-dockerd
	I1001 16:45:32.842637    4927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1001 16:45:32.845966    4927 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1001 16:45:32.851632    4927 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1001 16:45:32.933185    4927 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1001 16:45:33.012405    4927 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1001 16:45:33.012463    4927 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1001 16:45:33.017686    4927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:45:33.094016    4927 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1001 16:45:34.217088    4927 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.123065458s)
	I1001 16:45:34.217177    4927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1001 16:45:34.222196    4927 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1001 16:45:34.229466    4927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1001 16:45:34.234463    4927 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1001 16:45:34.314966    4927 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1001 16:45:34.393593    4927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:45:34.469713    4927 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1001 16:45:34.476423    4927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1001 16:45:34.481527    4927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:45:34.557501    4927 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1001 16:45:34.595987    4927 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1001 16:45:34.596087    4927 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1001 16:45:34.598602    4927 start.go:563] Will wait 60s for crictl version
	I1001 16:45:34.598663    4927 ssh_runner.go:195] Run: which crictl
	I1001 16:45:34.599965    4927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 16:45:34.615245    4927 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1001 16:45:34.615327    4927 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1001 16:45:34.630839    4927 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1001 16:45:34.651259    4927 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1001 16:45:34.651342    4927 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1001 16:45:34.652764    4927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 16:45:34.656421    4927 kubeadm.go:883] updating cluster {Name:stopped-upgrade-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50522 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1001 16:45:34.656482    4927 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1001 16:45:34.656535    4927 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1001 16:45:34.666725    4927 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1001 16:45:34.666734    4927 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1001 16:45:34.666790    4927 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1001 16:45:34.670139    4927 ssh_runner.go:195] Run: which lz4
	I1001 16:45:34.671431    4927 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 16:45:34.672621    4927 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 16:45:34.672630    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1001 16:45:35.548178    4927 docker.go:649] duration metric: took 876.794ms to copy over tarball
	I1001 16:45:35.548250    4927 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 16:45:36.686843    4927 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.138590792s)
	I1001 16:45:36.686857    4927 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 16:45:36.702706    4927 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1001 16:45:36.705715    4927 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1001 16:45:36.711057    4927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:45:36.776526    4927 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1001 16:45:33.552168    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:45:33.552451    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:45:33.577399    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:45:33.577536    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:45:33.594499    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:45:33.594602    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:45:33.606871    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:45:33.606952    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:45:33.617462    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:45:33.617536    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:45:33.628119    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:45:33.628204    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:45:33.638733    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:45:33.638810    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:45:33.650818    4804 logs.go:282] 0 containers: []
	W1001 16:45:33.650830    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:45:33.650904    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:45:33.661211    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:45:33.661227    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:45:33.661231    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:45:33.679083    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:45:33.679093    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:45:33.694353    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:45:33.694362    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:45:33.705672    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:45:33.705684    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:45:33.719616    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:45:33.719625    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:45:33.756013    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:45:33.756029    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:45:33.767954    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:45:33.767965    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:45:33.778785    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:45:33.778795    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:45:33.790361    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:45:33.790376    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:45:33.830090    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:45:33.830188    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:45:33.830469    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:45:33.830478    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:45:33.852259    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:45:33.852272    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:45:33.865983    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:45:33.865996    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:45:33.880101    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:45:33.880111    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:45:33.900602    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:45:33.900618    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:45:33.905441    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:45:33.905447    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:45:33.917127    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:45:33.917137    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:45:33.942124    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:45:33.942131    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:45:33.953347    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:45:33.953358    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:45:33.953383    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:45:33.953388    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:45:33.953392    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:45:33.953397    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:45:33.953400    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:45:38.273381    4927 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.496854583s)
	I1001 16:45:38.273486    4927 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1001 16:45:38.284683    4927 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1001 16:45:38.284695    4927 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1001 16:45:38.284700    4927 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1001 16:45:38.288760    4927 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:45:38.290493    4927 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1001 16:45:38.292572    4927 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1001 16:45:38.292738    4927 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:45:38.294477    4927 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1001 16:45:38.294612    4927 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 16:45:38.295634    4927 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1001 16:45:38.295695    4927 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1001 16:45:38.296968    4927 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1001 16:45:38.296986    4927 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 16:45:38.298085    4927 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1001 16:45:38.298175    4927 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1001 16:45:38.299464    4927 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 16:45:38.299610    4927 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1001 16:45:38.300571    4927 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1001 16:45:38.301997    4927 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	W1001 16:45:40.229921    4927 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1001 16:45:40.230697    4927 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1001 16:45:40.270976    4927 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1001 16:45:40.271032    4927 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 16:45:40.271167    4927 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1001 16:45:40.291262    4927 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1001 16:45:40.291417    4927 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1001 16:45:40.294128    4927 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1001 16:45:40.294155    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1001 16:45:40.318929    4927 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 16:45:40.339840    4927 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1001 16:45:40.339855    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1001 16:45:40.345749    4927 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1001 16:45:40.345783    4927 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 16:45:40.345852    4927 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 16:45:40.362482    4927 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1001 16:45:40.363795    4927 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1001 16:45:40.397833    4927 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1001 16:45:40.397866    4927 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1001 16:45:40.397869    4927 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1001 16:45:40.397887    4927 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1001 16:45:40.397930    4927 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1001 16:45:40.397941    4927 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1001 16:45:40.397947    4927 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1001 16:45:40.397977    4927 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1001 16:45:40.411967    4927 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1001 16:45:40.411977    4927 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1001 16:45:40.412105    4927 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1001 16:45:40.413537    4927 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1001 16:45:40.413550    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1001 16:45:40.420297    4927 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1001 16:45:40.420309    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1001 16:45:40.448700    4927 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W1001 16:45:40.660521    4927 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1001 16:45:40.660725    4927 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:45:40.677739    4927 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1001 16:45:40.677770    4927 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:45:40.677851    4927 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:45:40.695116    4927 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1001 16:45:40.695259    4927 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1001 16:45:40.696942    4927 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1001 16:45:40.696954    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1001 16:45:40.724587    4927 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1001 16:45:40.724600    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1001 16:45:40.909103    4927 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1001 16:45:40.912793    4927 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1001 16:45:40.914576    4927 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1001 16:45:40.975740    4927 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1001 16:45:40.975782    4927 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1001 16:45:40.975800    4927 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1001 16:45:40.975805    4927 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1001 16:45:40.975814    4927 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1001 16:45:40.975875    4927 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1001 16:45:40.975875    4927 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1001 16:45:40.975935    4927 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1001 16:45:40.975951    4927 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1001 16:45:40.975985    4927 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1001 16:45:40.994739    4927 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1001 16:45:40.995101    4927 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1001 16:45:40.995112    4927 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1001 16:45:40.995134    4927 cache_images.go:92] duration metric: took 2.710455292s to LoadCachedImages
	W1001 16:45:40.995173    4927 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
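The cache-load steps above all follow the same three-step pattern: check whether the image tarball already exists on the guest, scp it over if it does not, then pipe it into the container runtime. A minimal shell sketch of that pattern, using the coredns image from the log as a stand-in (paths are illustrative, not the exact minikube internals):

  # 1. Does the image tarball already exist on the node?
  stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6 \
    || scp ~/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 \
           node:/var/lib/minikube/images/coredns_v1.8.6   # 2. copy it over if missing
  # 3. Load the tarball into Docker on the node
  sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load

The warning above appears to mean that one expected tarball (kube-controller-manager_v1.24.1) was never present in the local cache, so the loader gives up on cached images for that component.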
	I1001 16:45:40.995179    4927 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1001 16:45:40.995235    4927 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-342000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 16:45:40.995304    4927 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1001 16:45:41.008342    4927 cni.go:84] Creating CNI manager for ""
	I1001 16:45:41.008354    4927 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:45:41.008362    4927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 16:45:41.008371    4927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-342000 NodeName:stopped-upgrade-342000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 16:45:41.008438    4927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-342000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 16:45:41.008508    4927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1001 16:45:41.011619    4927 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 16:45:41.011648    4927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 16:45:41.014770    4927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1001 16:45:41.019996    4927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 16:45:41.025060    4927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1001 16:45:41.030387    4927 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1001 16:45:41.031535    4927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
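The /etc/hosts update above is a single compacted bash one-liner; unpacked into separate commands (same effect, hypothetical temp-file name) it reads roughly as:

  # Drop any existing control-plane.minikube.internal entry, append the fresh one,
  # then copy the rebuilt file back over /etc/hosts with sudo.
  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
  printf '10.0.2.15\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
  sudo cp /tmp/hosts.new /etc/hosts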
	I1001 16:45:41.035548    4927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:45:41.117115    4927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 16:45:41.122676    4927 certs.go:68] Setting up /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000 for IP: 10.0.2.15
	I1001 16:45:41.122693    4927 certs.go:194] generating shared ca certs ...
	I1001 16:45:41.122702    4927 certs.go:226] acquiring lock for ca certs: {Name:mk74f46ad151665c6dd5cd39311b967c23e44dd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:45:41.122874    4927 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.key
	I1001 16:45:41.122924    4927 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/proxy-client-ca.key
	I1001 16:45:41.122931    4927 certs.go:256] generating profile certs ...
	I1001 16:45:41.123004    4927 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/client.key
	I1001 16:45:41.123021    4927 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.key.1a19673b
	I1001 16:45:41.123038    4927 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.crt.1a19673b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1001 16:45:41.197715    4927 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.crt.1a19673b ...
	I1001 16:45:41.197726    4927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.crt.1a19673b: {Name:mkf7b2bb4b2a9fc3a2ac37e52595639f961ffa70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:45:41.198038    4927 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.key.1a19673b ...
	I1001 16:45:41.198043    4927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.key.1a19673b: {Name:mkd560aa46ee4338eb0dc86c953bbc4e16a7d889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:45:41.198170    4927 certs.go:381] copying /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.crt.1a19673b -> /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.crt
	I1001 16:45:41.198370    4927 certs.go:385] copying /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.key.1a19673b -> /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.key
	I1001 16:45:41.198543    4927 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/proxy-client.key
	I1001 16:45:41.198672    4927 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/1659.pem (1338 bytes)
	W1001 16:45:41.198706    4927 certs.go:480] ignoring /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/1659_empty.pem, impossibly tiny 0 bytes
	I1001 16:45:41.198712    4927 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 16:45:41.198739    4927 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem (1078 bytes)
	I1001 16:45:41.198764    4927 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem (1123 bytes)
	I1001 16:45:41.198788    4927 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/key.pem (1679 bytes)
	I1001 16:45:41.198839    4927 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/files/etc/ssl/certs/16592.pem (1708 bytes)
	I1001 16:45:41.199210    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 16:45:41.206211    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 16:45:41.212546    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 16:45:41.219654    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1001 16:45:41.227050    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1001 16:45:41.234249    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 16:45:41.240839    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 16:45:41.247626    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1001 16:45:41.255103    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/1659.pem --> /usr/share/ca-certificates/1659.pem (1338 bytes)
	I1001 16:45:41.261920    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/files/etc/ssl/certs/16592.pem --> /usr/share/ca-certificates/16592.pem (1708 bytes)
	I1001 16:45:41.268268    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 16:45:41.275343    4927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 16:45:41.280264    4927 ssh_runner.go:195] Run: openssl version
	I1001 16:45:41.282259    4927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 16:45:41.285150    4927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 16:45:41.286545    4927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I1001 16:45:41.286578    4927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 16:45:41.288409    4927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 16:45:41.291690    4927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1659.pem && ln -fs /usr/share/ca-certificates/1659.pem /etc/ssl/certs/1659.pem"
	I1001 16:45:41.294835    4927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1659.pem
	I1001 16:45:41.296132    4927 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:04 /usr/share/ca-certificates/1659.pem
	I1001 16:45:41.296160    4927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1659.pem
	I1001 16:45:41.297872    4927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1659.pem /etc/ssl/certs/51391683.0"
	I1001 16:45:41.300581    4927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16592.pem && ln -fs /usr/share/ca-certificates/16592.pem /etc/ssl/certs/16592.pem"
	I1001 16:45:41.303869    4927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16592.pem
	I1001 16:45:41.305348    4927 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:04 /usr/share/ca-certificates/16592.pem
	I1001 16:45:41.305374    4927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16592.pem
	I1001 16:45:41.307078    4927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16592.pem /etc/ssl/certs/3ec20f2e.0"
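The odd-looking link names in this block (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: the trust directory /etc/ssl/certs is indexed by the hash that `openssl x509 -hash` prints, with a .0 suffix for the first certificate with that hash. A sketch of the same linking step done by hand, assuming the minikubeCA path from the log:

  # Print the subject hash, then create the trust-store symlink <hash>.0
  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"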
	I1001 16:45:41.309933    4927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 16:45:41.311207    4927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 16:45:41.313199    4927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 16:45:41.314989    4927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 16:45:41.317042    4927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 16:45:41.318866    4927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 16:45:41.320610    4927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1001 16:45:41.322452    4927 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50522 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1001 16:45:41.322531    4927 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1001 16:45:41.333560    4927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 16:45:41.336765    4927 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1001 16:45:41.336776    4927 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1001 16:45:41.336806    4927 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1001 16:45:41.340708    4927 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1001 16:45:41.341018    4927 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-342000" does not appear in /Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:45:41.341113    4927 kubeconfig.go:62] /Users/jenkins/minikube-integration/19740-1141/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-342000" cluster setting kubeconfig missing "stopped-upgrade-342000" context setting]
	I1001 16:45:41.341284    4927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/kubeconfig: {Name:mk6821adb20f42e2e1842a7c6bcaf1ce77531dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:45:41.341722    4927 kapi.go:59] client config for stopped-upgrade-342000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/client.key", CAFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10453e5d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 16:45:41.342079    4927 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1001 16:45:41.344787    4927 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-342000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I1001 16:45:41.344793    4927 kubeadm.go:1160] stopping kube-system containers ...
	I1001 16:45:41.344846    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1001 16:45:41.356416    4927 docker.go:483] Stopping containers: [4d26939b1517 b15f8da6832d b67cc8c69187 9f884cce7c0d da81e837a710 0e7521e8098a ef3ee586a96a b7ab46fee4d3]
	I1001 16:45:41.356490    4927 ssh_runner.go:195] Run: docker stop 4d26939b1517 b15f8da6832d b67cc8c69187 9f884cce7c0d da81e837a710 0e7521e8098a ef3ee586a96a b7ab46fee4d3
	I1001 16:45:41.368010    4927 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1001 16:45:41.374245    4927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 16:45:41.377103    4927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 16:45:41.377109    4927 kubeadm.go:157] found existing configuration files:
	
	I1001 16:45:41.377137    4927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/admin.conf
	I1001 16:45:41.379821    4927 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 16:45:41.379850    4927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 16:45:41.383044    4927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/kubelet.conf
	I1001 16:45:41.385917    4927 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 16:45:41.385944    4927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 16:45:41.388679    4927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/controller-manager.conf
	I1001 16:45:41.391514    4927 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 16:45:41.391540    4927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 16:45:41.394466    4927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/scheduler.conf
	I1001 16:45:41.396795    4927 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 16:45:41.396822    4927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 16:45:41.399687    4927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 16:45:41.402740    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:45:41.426556    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:45:42.238289    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:45:42.365162    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:45:42.389427    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:45:42.414528    4927 api_server.go:52] waiting for apiserver process to appear ...
	I1001 16:45:42.414620    4927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 16:45:42.916758    4927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 16:45:43.416668    4927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 16:45:43.421412    4927 api_server.go:72] duration metric: took 1.006895958s to wait for apiserver process to appear ...
	I1001 16:45:43.421423    4927 api_server.go:88] waiting for apiserver healthz status ...
	I1001 16:45:43.421443    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
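From here on, two upgrade profiles interleave the same wait loop (PID 4927 is stopped-upgrade-342000, PID 4804 is running-upgrade-193000): probe https://10.0.2.15:8443/healthz, treat a client timeout as "stopped", and retry. A rough curl equivalent of one wait loop, with illustrative timeout values rather than the exact ones minikube uses:

  # Retry the apiserver health endpoint until it answers, with a 5s per-request timeout.
  until curl -ksf --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
    sleep 1
  done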
	I1001 16:45:43.957375    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:45:48.423210    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:45:48.423308    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:45:48.958046    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:45:48.958184    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:45:48.969378    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:45:48.969464    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:45:48.980842    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:45:48.980927    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:45:48.991378    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:45:48.991464    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:45:49.001725    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:45:49.001813    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:45:49.011939    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:45:49.012010    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:45:49.022730    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:45:49.022806    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:45:49.036452    4804 logs.go:282] 0 containers: []
	W1001 16:45:49.036464    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:45:49.036537    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:45:49.047690    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:45:49.047706    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:45:49.047713    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:45:49.085486    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:45:49.085586    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:45:49.085871    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:45:49.085880    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:45:49.104754    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:45:49.104765    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:45:49.128353    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:45:49.128368    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:45:49.140057    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:45:49.140068    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:45:49.144941    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:45:49.144949    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:45:49.181508    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:45:49.181520    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:45:49.195949    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:45:49.195962    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:45:49.211277    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:45:49.211290    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:45:49.225626    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:45:49.225642    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:45:49.237197    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:45:49.237210    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:45:49.248949    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:45:49.248962    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:45:49.260181    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:45:49.260190    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:45:49.275044    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:45:49.275060    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:45:49.297278    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:45:49.297289    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:45:49.309076    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:45:49.309090    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:45:49.334200    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:45:49.334209    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
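The "container status" command above packs a fallback into one line: `which crictl || echo crictl` substitutes the literal word crictl when the binary is missing, so the first sudo invocation fails and the trailing `|| sudo docker ps -a` takes over. A roughly equivalent short form (the original also falls back if crictl itself errors):

  # Prefer crictl if present and working, otherwise fall back to docker.
  sudo crictl ps -a 2>/dev/null || sudo docker ps -a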
	I1001 16:45:49.345660    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:45:49.345670    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:45:49.345699    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:45:49.345703    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:45:49.345707    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:45:49.345710    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:45:49.345713    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:45:53.424034    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:45:53.424068    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:45:58.424457    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:45:58.424525    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:45:59.347823    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:03.425429    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:03.425472    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:04.350148    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:04.350654    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:46:04.387665    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:46:04.387828    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:46:04.412055    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:46:04.412166    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:46:04.426774    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:46:04.426865    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:46:04.440177    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:46:04.440267    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:46:04.450988    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:46:04.451073    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:46:04.461988    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:46:04.462072    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:46:04.473067    4804 logs.go:282] 0 containers: []
	W1001 16:46:04.473082    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:46:04.473160    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:46:04.484206    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:46:04.484224    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:46:04.484229    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:46:04.502155    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:46:04.502167    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:46:04.526105    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:46:04.526112    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:46:04.562661    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:46:04.562672    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:46:04.577150    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:46:04.577167    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:46:04.601250    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:46:04.601265    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:46:04.640984    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:46:04.641085    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:46:04.641373    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:46:04.641380    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:46:04.661274    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:46:04.661288    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:46:04.677337    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:46:04.677347    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:46:04.689222    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:46:04.689234    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:46:04.701738    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:46:04.701749    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:46:04.714115    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:46:04.714130    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:46:04.718605    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:46:04.718615    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:46:04.731979    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:46:04.731995    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:46:04.743946    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:46:04.743958    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:46:04.755381    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:46:04.755394    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:46:04.769865    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:46:04.769875    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:46:04.785025    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:46:04.785038    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:46:04.785063    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:46:04.785067    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:46:04.785071    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:46:04.785074    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:46:04.785076    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:46:08.426432    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:08.426482    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:13.427643    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:13.427701    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:14.789157    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:18.429344    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:18.429425    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:19.791205    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:19.791681    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:46:19.824374    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:46:19.824517    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:46:19.841372    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:46:19.841480    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:46:19.858382    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:46:19.858454    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:46:19.869188    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:46:19.869271    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:46:19.879816    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:46:19.879906    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:46:19.897951    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:46:19.898037    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:46:19.914539    4804 logs.go:282] 0 containers: []
	W1001 16:46:19.914550    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:46:19.914620    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:46:19.925711    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:46:19.925728    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:46:19.925734    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:46:19.930327    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:46:19.930337    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:46:19.943619    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:46:19.943629    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:46:19.958033    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:46:19.958043    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:46:19.970445    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:46:19.970456    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:46:19.987757    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:46:19.987771    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:46:20.024966    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:46:20.025057    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:46:20.025330    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:46:20.025335    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:46:20.036083    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:46:20.036099    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:46:20.051088    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:46:20.051098    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:46:20.062637    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:46:20.062647    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:46:20.076059    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:46:20.076069    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:46:20.093354    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:46:20.093365    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:46:20.127293    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:46:20.127304    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:46:20.146821    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:46:20.146836    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:46:20.159347    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:46:20.159359    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:46:20.171805    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:46:20.171818    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:46:20.197020    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:46:20.197027    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:46:20.209411    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:46:20.209428    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:46:20.209459    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:46:20.209464    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:46:20.209467    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:46:20.209472    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:46:20.209475    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:46:23.430333    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:23.430429    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:28.433082    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:28.433170    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:30.211695    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:33.435745    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:33.435792    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:35.214323    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:35.214828    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:46:35.250538    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:46:35.250710    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:46:35.271178    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:46:35.271309    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:46:35.290861    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:46:35.290956    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:46:35.302580    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:46:35.302672    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:46:35.313191    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:46:35.313273    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:46:35.324103    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:46:35.324187    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:46:35.334330    4804 logs.go:282] 0 containers: []
	W1001 16:46:35.334341    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:46:35.334412    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:46:35.348245    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:46:35.348263    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:46:35.348269    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:46:35.362701    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:46:35.362712    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:46:35.373591    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:46:35.373602    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:46:35.385123    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:46:35.385133    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:46:35.397069    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:46:35.397084    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:46:35.434340    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:46:35.434449    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:46:35.434738    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:46:35.434743    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:46:35.449135    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:46:35.449145    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:46:35.466202    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:46:35.466213    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:46:35.484769    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:46:35.484781    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:46:35.500248    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:46:35.500258    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:46:35.505015    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:46:35.505022    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:46:35.538924    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:46:35.538936    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:46:35.553515    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:46:35.553527    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:46:35.576966    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:46:35.576973    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:46:35.595913    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:46:35.595923    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:46:35.609084    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:46:35.609095    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:46:35.627517    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:46:35.627527    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:46:35.639544    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:46:35.639560    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:46:35.639592    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:46:35.639597    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:46:35.639602    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:46:35.639605    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:46:35.639609    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:46:38.438179    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:38.438245    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:43.440584    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:43.440762    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:46:43.452099    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:46:43.452194    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:46:43.462646    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:46:43.462733    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:46:43.472873    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:46:43.472965    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:46:43.488288    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:46:43.488374    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:46:43.498574    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:46:43.498666    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:46:43.509432    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:46:43.509516    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:46:43.519492    4927 logs.go:282] 0 containers: []
	W1001 16:46:43.519521    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:46:43.519591    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:46:43.530097    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:46:43.530116    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:46:43.530121    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:46:43.555391    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:46:43.555398    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:46:43.559339    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:46:43.559353    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:46:43.602513    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:46:43.602526    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:46:43.614382    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:46:43.614391    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:46:43.631130    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:46:43.631141    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:46:43.644092    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:46:43.644106    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:46:43.655145    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:46:43.655160    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:46:43.667053    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:46:43.667064    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:46:43.705550    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:46:43.705558    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:46:43.719082    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:46:43.719092    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:46:43.732871    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:46:43.732882    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:46:43.755300    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:46:43.755315    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:46:43.766788    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:46:43.766799    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:46:43.784984    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:46:43.784997    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:46:43.863703    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:46:43.863715    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:46:43.878651    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:46:43.878664    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:46:46.396409    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:45.643713    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:51.399031    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:51.399188    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:46:51.414611    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:46:51.414712    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:46:51.427289    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:46:51.427383    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:46:51.439204    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:46:51.439289    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:46:51.450116    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:46:51.450203    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:46:51.460495    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:46:51.460576    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:46:51.470821    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:46:51.470918    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:46:51.481519    4927 logs.go:282] 0 containers: []
	W1001 16:46:51.481530    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:46:51.481601    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:46:51.493829    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:46:51.493847    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:46:51.493853    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:46:51.510507    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:46:51.510518    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:46:51.536646    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:46:51.536654    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:46:51.548020    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:46:51.548030    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:46:51.565791    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:46:51.565801    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:46:51.579412    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:46:51.579423    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:46:51.593792    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:46:51.593806    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:46:51.605952    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:46:51.605964    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:46:51.643338    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:46:51.643348    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:46:51.654806    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:46:51.654820    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:46:51.667132    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:46:51.667143    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:46:51.671341    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:46:51.671348    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:46:51.706432    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:46:51.706446    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:46:51.720421    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:46:51.720435    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:46:51.731465    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:46:51.731476    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:46:51.768892    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:46:51.768902    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:46:51.781597    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:46:51.781607    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:46:50.646077    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:50.646431    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:46:50.674138    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:46:50.674284    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:46:50.691109    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:46:50.691213    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:46:50.707190    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:46:50.707272    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:46:50.717719    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:46:50.717808    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:46:50.727966    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:46:50.728044    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:46:50.738696    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:46:50.738783    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:46:50.749086    4804 logs.go:282] 0 containers: []
	W1001 16:46:50.749097    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:46:50.749167    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:46:50.759528    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:46:50.759546    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:46:50.759552    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:46:50.773417    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:46:50.773426    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:46:50.789490    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:46:50.789500    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:46:50.810967    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:46:50.810990    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:46:50.830118    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:46:50.830129    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:46:50.849758    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:46:50.849767    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:46:50.861032    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:46:50.861042    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:46:50.872877    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:46:50.872887    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:46:50.884228    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:46:50.884238    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:46:50.902253    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:46:50.902262    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:46:50.914010    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:46:50.914020    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:46:50.953225    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:46:50.953317    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:46:50.953611    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:46:50.953619    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:46:50.958033    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:46:50.958041    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:46:50.994265    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:46:50.994278    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:46:51.017269    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:46:51.017277    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:46:51.031707    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:46:51.031717    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:46:51.047049    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:46:51.047060    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:46:51.058233    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:46:51.058247    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:46:51.058281    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:46:51.058285    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:46:51.058289    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:46:51.058293    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:46:51.058297    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:46:54.295133    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:59.297530    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:59.298051    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:46:59.331892    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:46:59.332055    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:46:59.350034    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:46:59.350152    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:46:59.364404    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:46:59.364483    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:46:59.375768    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:46:59.375859    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:46:59.386471    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:46:59.386551    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:46:59.398474    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:46:59.398556    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:46:59.408416    4927 logs.go:282] 0 containers: []
	W1001 16:46:59.408429    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:46:59.408501    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:46:59.424869    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:46:59.424888    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:46:59.424895    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:46:59.461982    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:46:59.461993    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:46:59.497715    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:46:59.497726    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:46:59.535428    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:46:59.535439    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:46:59.547411    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:46:59.547426    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:46:59.561927    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:46:59.561939    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:46:59.579558    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:46:59.579570    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:46:59.591024    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:46:59.591037    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:46:59.602474    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:46:59.602486    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:46:59.628323    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:46:59.628331    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:46:59.642071    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:46:59.642081    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:46:59.656504    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:46:59.656515    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:46:59.667751    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:46:59.667764    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:46:59.694465    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:46:59.694476    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:46:59.699018    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:46:59.699025    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:46:59.711070    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:46:59.711085    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:46:59.726386    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:46:59.726395    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:01.062340    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:02.240530    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:06.064669    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:06.065020    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:47:06.090876    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:47:06.091020    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:47:06.108916    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:47:06.109020    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:47:06.121937    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:47:06.122031    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:47:06.137619    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:47:06.137708    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:47:06.147729    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:47:06.147803    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:47:06.157810    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:47:06.157884    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:47:06.168168    4804 logs.go:282] 0 containers: []
	W1001 16:47:06.168179    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:47:06.168252    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:47:06.178618    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:47:06.178634    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:47:06.178638    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:47:06.192297    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:47:06.192306    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:47:06.203419    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:47:06.203429    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:47:06.216700    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:47:06.216711    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:47:06.254303    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:47:06.254394    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:47:06.254668    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:47:06.254672    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:47:06.269178    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:47:06.269189    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:47:06.288228    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:47:06.288241    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:47:06.309412    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:47:06.309424    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:47:06.325524    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:47:06.325535    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:47:06.330170    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:47:06.330179    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:47:06.344449    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:47:06.344461    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:47:06.362022    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:47:06.362034    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:47:06.380443    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:47:06.380453    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:06.392488    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:47:06.392504    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:47:06.407001    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:47:06.407011    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:47:06.419356    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:47:06.419364    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:47:06.445121    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:47:06.445132    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:47:06.481988    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:47:06.482003    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:47:06.482033    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:47:06.482039    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:47:06.482043    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:47:06.482047    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:47:06.482050    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:47:07.242838    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:07.243010    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:47:07.257796    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:47:07.257893    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:47:07.270489    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:47:07.270570    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:47:07.283053    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:47:07.283136    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:47:07.294148    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:47:07.294239    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:47:07.305018    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:47:07.305099    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:47:07.316144    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:47:07.316225    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:47:07.326727    4927 logs.go:282] 0 containers: []
	W1001 16:47:07.326739    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:47:07.326812    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:47:07.337483    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:47:07.337504    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:47:07.337509    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:47:07.342308    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:47:07.342316    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:47:07.378258    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:47:07.378270    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:47:07.397395    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:47:07.397411    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:47:07.408614    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:47:07.408626    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:47:07.447658    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:47:07.447668    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:47:07.461548    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:47:07.461563    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:47:07.503955    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:47:07.503966    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:47:07.520458    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:47:07.520470    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:47:07.533095    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:47:07.533105    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:07.545283    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:47:07.545299    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:47:07.559499    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:47:07.559510    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:47:07.574570    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:47:07.574582    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:47:07.593645    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:47:07.593656    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:47:07.610868    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:47:07.610880    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:47:07.624666    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:47:07.624677    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:47:07.637004    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:47:07.637019    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:47:10.162793    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:15.165172    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:15.165662    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:47:15.196958    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:47:15.197115    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:47:15.221955    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:47:15.222067    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:47:15.235976    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:47:15.236067    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:47:15.247116    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:47:15.247199    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:47:15.257790    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:47:15.257864    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:47:15.268980    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:47:15.269066    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:47:15.279214    4927 logs.go:282] 0 containers: []
	W1001 16:47:15.279224    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:47:15.279289    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:47:15.290691    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:47:15.290708    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:47:15.290713    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:47:15.308306    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:47:15.308317    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:47:15.319940    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:47:15.319953    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:47:15.345025    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:47:15.345034    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:47:15.359556    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:47:15.359567    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:47:15.363775    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:47:15.363783    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:47:15.398138    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:47:15.398148    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:47:15.409429    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:47:15.409441    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:47:15.448416    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:47:15.448425    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:47:15.488011    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:47:15.488023    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:47:15.502573    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:47:15.502582    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:47:15.514645    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:47:15.514659    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:47:15.532696    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:47:15.532707    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:47:15.544713    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:47:15.544723    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:47:15.562076    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:47:15.562088    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:47:15.574015    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:47:15.574027    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:47:15.588322    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:47:15.588339    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:16.486119    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:18.100774    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:21.487777    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:21.488016    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:47:21.500307    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:47:21.500397    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:47:21.511575    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:47:21.511659    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:47:21.522376    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:47:21.522462    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:47:21.533464    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:47:21.533546    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:47:21.544016    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:47:21.544096    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:47:21.554559    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:47:21.554632    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:47:21.564895    4804 logs.go:282] 0 containers: []
	W1001 16:47:21.564906    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:47:21.564979    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:47:21.579638    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:47:21.579656    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:47:21.579662    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:47:21.596384    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:47:21.596393    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:47:21.608116    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:47:21.608125    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:47:21.619545    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:47:21.619559    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:47:21.633599    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:47:21.633614    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:47:21.647540    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:47:21.647554    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:47:21.658232    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:47:21.658242    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:47:21.669876    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:47:21.669884    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:47:21.681416    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:47:21.681424    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:21.693046    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:47:21.693057    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:47:21.712430    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:47:21.712445    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:47:21.736438    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:47:21.736445    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:47:21.741017    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:47:21.741022    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:47:21.762046    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:47:21.762060    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:47:21.779098    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:47:21.779112    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:47:21.817531    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:47:21.817623    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:47:21.817895    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:47:21.817899    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:47:21.851171    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:47:21.851180    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:47:21.865682    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:47:21.865691    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:47:21.865721    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:47:21.865724    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:47:21.865728    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:47:21.865732    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:47:21.865736    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
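Both minikube processes above (PIDs 4804 and 4927) are looping through the same recovery pattern: probe the apiserver's /healthz endpoint, and when the request times out, enumerate the control-plane containers and dump their recent logs before retrying. A minimal sketch of reproducing that probe and the kubelet-problem scan by hand, assuming shell access to the guest at 10.0.2.15 and using curl in place of minikube's internal Go HTTP client:

    # probe the apiserver the way the log does (a healthy server answers "ok")
    curl -k --max-time 5 https://10.0.2.15:8443/healthz
    # list the kube-apiserver containers minikube filters on
    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
    # scan the kubelet journal for the RBAC failures flagged as kubelet problems
    sudo journalctl -u kubelet -n 400 | grep -E 'reflector\.go|forbidden'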
	I1001 16:47:23.103038    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:23.103245    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:47:23.132039    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:47:23.132153    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:47:23.147041    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:47:23.147147    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:47:23.161736    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:47:23.161818    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:47:23.172451    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:47:23.172542    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:47:23.182790    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:47:23.182871    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:47:23.193549    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:47:23.193622    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:47:23.203242    4927 logs.go:282] 0 containers: []
	W1001 16:47:23.203256    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:47:23.203328    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:47:23.218584    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:47:23.218604    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:47:23.218610    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:47:23.232744    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:47:23.232758    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:47:23.243423    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:47:23.243435    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:47:23.259631    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:47:23.259648    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:47:23.272004    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:47:23.272014    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:47:23.276088    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:47:23.276094    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:47:23.290472    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:47:23.290486    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:47:23.304524    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:47:23.304537    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:47:23.316451    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:47:23.316468    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:47:23.328787    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:47:23.328798    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:47:23.354615    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:47:23.354623    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:23.366578    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:47:23.366589    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:47:23.386518    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:47:23.386534    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:47:23.398708    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:47:23.398719    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:47:23.437944    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:47:23.437955    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:47:23.473467    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:47:23.473481    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:47:23.511152    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:47:23.511170    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:47:26.023753    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:31.024599    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:31.025108    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:47:31.058150    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:47:31.058309    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:47:31.078026    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:47:31.078148    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:47:31.092295    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:47:31.092388    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:47:31.104076    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:47:31.104163    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:47:31.114997    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:47:31.115079    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:47:31.133466    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:47:31.133554    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:47:31.143519    4927 logs.go:282] 0 containers: []
	W1001 16:47:31.143531    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:47:31.143601    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:47:31.155427    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:47:31.155445    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:47:31.155450    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:31.168221    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:47:31.168234    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:47:31.183267    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:47:31.183280    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:47:31.196257    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:47:31.196269    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:47:31.207597    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:47:31.207608    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:47:31.220486    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:47:31.220499    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:47:31.234312    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:47:31.234322    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:47:31.246456    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:47:31.246467    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:47:31.271193    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:47:31.271202    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:47:31.275332    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:47:31.275338    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:47:31.310620    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:47:31.310631    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:47:31.322559    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:47:31.322571    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:47:31.340181    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:47:31.340197    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:47:31.362740    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:47:31.362751    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:47:31.399637    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:47:31.399649    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:47:31.437696    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:47:31.437709    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:47:31.452820    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:47:31.452830    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:47:31.869792    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:33.966601    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:36.872157    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:36.872647    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:47:36.911817    4804 logs.go:282] 2 containers: [a2fc4e9b0aa3 878e5dcff978]
	I1001 16:47:36.911977    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:47:36.934591    4804 logs.go:282] 2 containers: [9c7399541e2a c7e4b32a30f5]
	I1001 16:47:36.934707    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:47:36.951099    4804 logs.go:282] 1 containers: [7a6da3f7730b]
	I1001 16:47:36.951186    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:47:36.963809    4804 logs.go:282] 2 containers: [ebd500e04a70 7f3704770814]
	I1001 16:47:36.963889    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:47:36.975315    4804 logs.go:282] 1 containers: [2b0305fbc022]
	I1001 16:47:36.975390    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:47:36.985951    4804 logs.go:282] 2 containers: [8bb9a95603f1 94e8647254fc]
	I1001 16:47:36.986026    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:47:36.999698    4804 logs.go:282] 0 containers: []
	W1001 16:47:36.999709    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:47:36.999785    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:47:37.010283    4804 logs.go:282] 2 containers: [786727a48935 c113ebb55282]
	I1001 16:47:37.010300    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:47:37.010306    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:47:37.045152    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:47:37.045168    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:47:37.067891    4804 logs.go:123] Gathering logs for kube-apiserver [a2fc4e9b0aa3] ...
	I1001 16:47:37.067899    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2fc4e9b0aa3"
	I1001 16:47:37.082588    4804 logs.go:123] Gathering logs for etcd [c7e4b32a30f5] ...
	I1001 16:47:37.082598    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c7e4b32a30f5"
	I1001 16:47:37.099870    4804 logs.go:123] Gathering logs for coredns [7a6da3f7730b] ...
	I1001 16:47:37.099879    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7a6da3f7730b"
	I1001 16:47:37.111289    4804 logs.go:123] Gathering logs for kube-scheduler [ebd500e04a70] ...
	I1001 16:47:37.111304    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd500e04a70"
	I1001 16:47:37.122798    4804 logs.go:123] Gathering logs for kube-scheduler [7f3704770814] ...
	I1001 16:47:37.122809    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f3704770814"
	I1001 16:47:37.137347    4804 logs.go:123] Gathering logs for storage-provisioner [c113ebb55282] ...
	I1001 16:47:37.137358    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c113ebb55282"
	I1001 16:47:37.149418    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:47:37.149430    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:37.160820    4804 logs.go:123] Gathering logs for kube-proxy [2b0305fbc022] ...
	I1001 16:47:37.160835    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b0305fbc022"
	I1001 16:47:37.172452    4804 logs.go:123] Gathering logs for kube-controller-manager [8bb9a95603f1] ...
	I1001 16:47:37.172466    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8bb9a95603f1"
	I1001 16:47:37.189978    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:47:37.189991    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:47:37.229536    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:47:37.229638    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:47:37.229932    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:47:37.229940    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:47:37.234521    4804 logs.go:123] Gathering logs for kube-apiserver [878e5dcff978] ...
	I1001 16:47:37.234527    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 878e5dcff978"
	I1001 16:47:37.254557    4804 logs.go:123] Gathering logs for etcd [9c7399541e2a] ...
	I1001 16:47:37.254571    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c7399541e2a"
	I1001 16:47:37.268443    4804 logs.go:123] Gathering logs for kube-controller-manager [94e8647254fc] ...
	I1001 16:47:37.268456    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e8647254fc"
	I1001 16:47:37.283914    4804 logs.go:123] Gathering logs for storage-provisioner [786727a48935] ...
	I1001 16:47:37.283924    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 786727a48935"
	I1001 16:47:37.295902    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:47:37.295916    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:47:37.295942    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:47:37.295947    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:47:37.296007    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:47:37.296013    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:47:37.296018    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:47:38.969000    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:38.969401    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:47:39.006841    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:47:39.007010    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:47:39.026022    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:47:39.026140    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:47:39.039936    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:47:39.040032    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:47:39.051948    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:47:39.052023    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:47:39.062597    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:47:39.062680    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:47:39.073659    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:47:39.073742    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:47:39.085434    4927 logs.go:282] 0 containers: []
	W1001 16:47:39.085446    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:47:39.085518    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:47:39.095909    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:47:39.095933    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:47:39.095938    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:47:39.100080    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:47:39.100088    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:47:39.113688    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:47:39.113700    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:47:39.151881    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:47:39.151892    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:47:39.166123    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:47:39.166134    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:47:39.177567    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:47:39.177579    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:47:39.194199    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:47:39.194211    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:47:39.211732    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:47:39.211748    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:47:39.224357    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:47:39.224368    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:47:39.248979    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:47:39.249009    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:47:39.286155    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:47:39.286168    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:47:39.299896    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:47:39.299908    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:47:39.311154    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:47:39.311167    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:47:39.327502    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:47:39.327518    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:47:39.362170    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:47:39.362186    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:47:39.374172    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:47:39.374183    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:47:39.385182    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:47:39.385193    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
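The recurring "container status" step relies on a small fallback chain: resolve crictl if it is on PATH and list containers with it, otherwise fall back to plain docker ps. Written out, the logged one-liner is roughly equivalent to:

    # `which crictl || echo crictl` only resolves the binary path before sudo runs it;
    # if crictl is missing or errors, the left side fails and docker ps takes over
    sudo crictl ps -a || sudo docker ps -a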
	I1001 16:47:41.899260    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:47.298930    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:46.901607    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:46.901895    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:47:46.927883    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:47:46.928011    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:47:46.944787    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:47:46.944897    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:47:46.957866    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:47:46.957958    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:47:46.970095    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:47:46.970182    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:47:46.980558    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:47:46.980632    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:47:46.990877    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:47:46.990948    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:47:47.000893    4927 logs.go:282] 0 containers: []
	W1001 16:47:47.000904    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:47:47.000977    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:47:47.012298    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:47:47.012317    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:47:47.012323    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:47:47.023909    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:47:47.023922    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:47:47.048120    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:47:47.048127    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:47:47.086237    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:47:47.086249    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:47:47.097455    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:47:47.097470    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:47:47.109227    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:47:47.109243    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:47:47.124045    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:47:47.124056    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:47:47.142081    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:47:47.142092    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:47:47.154901    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:47:47.154912    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:47:47.159255    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:47:47.159263    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:47:47.195827    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:47:47.195838    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:47:47.212848    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:47:47.212865    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:47:47.226522    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:47:47.226533    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:47.238645    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:47:47.238654    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:47:47.276330    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:47:47.276347    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:47:47.288040    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:47:47.288058    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:47:47.304822    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:47:47.304833    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:47:49.825019    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:52.301171    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:52.301333    4804 kubeadm.go:597] duration metric: took 4m7.995156667s to restartPrimaryControlPlane
	W1001 16:47:52.301511    4804 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1001 16:47:52.301567    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1001 16:47:53.279278    4804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 16:47:53.284208    4804 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 16:47:53.287193    4804 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 16:47:53.289753    4804 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 16:47:53.289760    4804 kubeadm.go:157] found existing configuration files:
	
	I1001 16:47:53.289783    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/admin.conf
	I1001 16:47:53.292244    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 16:47:53.292268    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 16:47:53.295532    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/kubelet.conf
	I1001 16:47:53.298468    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 16:47:53.298500    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 16:47:53.301033    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/controller-manager.conf
	I1001 16:47:53.303998    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 16:47:53.304019    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 16:47:53.306923    4804 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/scheduler.conf
	I1001 16:47:53.309482    4804 kubeadm.go:163] "https://control-plane.minikube.internal:50304" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50304 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 16:47:53.309508    4804 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
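Before re-running kubeadm init, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that does not reference it; here every grep fails simply because the kubeadm reset above already removed the files. A sketch of the same check for a single file, using the endpoint and path shown in the log:

    CONF=/etc/kubernetes/admin.conf
    ENDPOINT=https://control-plane.minikube.internal:50304
    # keep the kubeconfig only if it still points at the expected endpoint
    sudo grep -q "$ENDPOINT" "$CONF" || sudo rm -f "$CONF"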
	I1001 16:47:53.312268    4804 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 16:47:53.330430    4804 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1001 16:47:53.330514    4804 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 16:47:53.379587    4804 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 16:47:53.379653    4804 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 16:47:53.379697    4804 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 16:47:53.429808    4804 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 16:47:53.433990    4804 out.go:235]   - Generating certificates and keys ...
	I1001 16:47:53.434025    4804 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 16:47:53.434081    4804 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 16:47:53.434219    4804 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 16:47:53.434254    4804 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 16:47:53.434294    4804 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 16:47:53.434325    4804 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 16:47:53.434360    4804 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 16:47:53.434396    4804 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 16:47:53.434501    4804 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 16:47:53.434611    4804 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 16:47:53.434711    4804 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 16:47:53.434747    4804 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 16:47:53.543588    4804 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 16:47:53.662582    4804 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 16:47:53.766632    4804 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 16:47:53.814676    4804 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 16:47:53.843001    4804 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 16:47:53.843332    4804 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 16:47:53.843353    4804 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 16:47:53.930318    4804 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 16:47:54.827289    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:54.827402    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:47:54.838923    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:47:54.839012    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:47:54.850947    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:47:54.851033    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:47:54.862431    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:47:54.862519    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:47:54.875922    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:47:54.876015    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:47:54.887569    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:47:54.887655    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:47:54.900882    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:47:54.900981    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:47:54.913533    4927 logs.go:282] 0 containers: []
	W1001 16:47:54.913546    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:47:54.913674    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:47:54.925643    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:47:54.925664    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:47:54.925669    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:47:54.941335    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:47:54.941347    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:47:54.961339    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:47:54.961353    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:47:54.973192    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:47:54.973206    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:47:54.986499    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:47:54.986514    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:47:55.003113    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:47:55.003135    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:47:55.016808    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:47:55.016820    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:47:55.042877    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:47:55.042902    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:47:55.080442    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:47:55.080454    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:47:55.119640    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:47:55.119652    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:47:55.133470    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:47:55.133486    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:47:55.147239    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:47:55.147254    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:47:55.163710    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:47:55.163723    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:55.175742    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:47:55.175754    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:47:55.214468    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:47:55.214484    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:47:55.219944    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:47:55.219953    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:47:55.236930    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:47:55.236946    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:47:53.934524    4804 out.go:235]   - Booting up control plane ...
	I1001 16:47:53.934569    4804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 16:47:53.934633    4804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 16:47:53.934680    4804 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 16:47:53.934743    4804 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 16:47:53.936799    4804 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 16:47:58.940449    4804 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.003640 seconds
	I1001 16:47:58.940535    4804 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 16:47:58.945486    4804 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 16:47:59.454119    4804 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 16:47:59.454267    4804 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-193000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 16:47:59.960070    4804 kubeadm.go:310] [bootstrap-token] Using token: eg9n22.z0ark4bzn3ubtph2
	I1001 16:47:59.964649    4804 out.go:235]   - Configuring RBAC rules ...
	I1001 16:47:59.964723    4804 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 16:47:59.964767    4804 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 16:47:59.966818    4804 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 16:47:59.971386    4804 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 16:47:59.972250    4804 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 16:47:59.973127    4804 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 16:47:59.976695    4804 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 16:48:00.145577    4804 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 16:48:00.364088    4804 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 16:48:00.364567    4804 kubeadm.go:310] 
	I1001 16:48:00.364605    4804 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 16:48:00.364608    4804 kubeadm.go:310] 
	I1001 16:48:00.364646    4804 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 16:48:00.364648    4804 kubeadm.go:310] 
	I1001 16:48:00.364661    4804 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 16:48:00.364716    4804 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 16:48:00.364748    4804 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 16:48:00.364787    4804 kubeadm.go:310] 
	I1001 16:48:00.364893    4804 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 16:48:00.364897    4804 kubeadm.go:310] 
	I1001 16:48:00.364960    4804 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 16:48:00.364964    4804 kubeadm.go:310] 
	I1001 16:48:00.365032    4804 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 16:48:00.365072    4804 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 16:48:00.365159    4804 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 16:48:00.365178    4804 kubeadm.go:310] 
	I1001 16:48:00.365276    4804 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 16:48:00.365313    4804 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 16:48:00.365316    4804 kubeadm.go:310] 
	I1001 16:48:00.365367    4804 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token eg9n22.z0ark4bzn3ubtph2 \
	I1001 16:48:00.365430    4804 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7410ba584d1420d22d17a85d1568f395de246b7fddabe3e224321915d0b92005 \
	I1001 16:48:00.365443    4804 kubeadm.go:310] 	--control-plane 
	I1001 16:48:00.365445    4804 kubeadm.go:310] 
	I1001 16:48:00.365486    4804 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 16:48:00.365491    4804 kubeadm.go:310] 
	I1001 16:48:00.365533    4804 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token eg9n22.z0ark4bzn3ubtph2 \
	I1001 16:48:00.365597    4804 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7410ba584d1420d22d17a85d1568f395de246b7fddabe3e224321915d0b92005 
	I1001 16:48:00.365670    4804 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
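kubeadm init completed and printed join commands carrying a discovery-token-ca-cert-hash. That hash is the SHA-256 of the cluster CA's public key and can be recomputed on the control plane to confirm it matches the value in the log; this is the standard kubeadm verification recipe, not something the test itself runs, and it assumes an RSA CA under the certificateDir reported above (/var/lib/minikube/certs):

    # recompute the discovery-token-ca-cert-hash from the cluster CA certificate
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'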
	I1001 16:48:00.365679    4804 cni.go:84] Creating CNI manager for ""
	I1001 16:48:00.365686    4804 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:48:00.373343    4804 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 16:48:00.376241    4804 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 16:48:00.379391    4804 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
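With the bridge CNI selected, minikube writes a 496-byte config to /etc/cni/net.d/1-k8s.conflist on the guest; the file contents themselves are not echoed into the log. To inspect what was written (a manual check, not part of the test):

    # show the bridge CNI config minikube just copied over
    sudo cat /etc/cni/net.d/1-k8s.conflist
    # confirm no competing CNI configs are present
    ls -la /etc/cni/net.d/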
	I1001 16:48:00.384284    4804 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 16:48:00.384331    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 16:48:00.384343    4804 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-193000 minikube.k8s.io/updated_at=2024_10_01T16_48_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=running-upgrade-193000 minikube.k8s.io/primary=true
	I1001 16:48:00.426093    4804 kubeadm.go:1113] duration metric: took 41.799667ms to wait for elevateKubeSystemPrivileges
	I1001 16:48:00.426110    4804 ops.go:34] apiserver oom_adj: -16
	I1001 16:48:00.426114    4804 kubeadm.go:394] duration metric: took 4m16.133723667s to StartCluster
	I1001 16:48:00.426123    4804 settings.go:142] acquiring lock: {Name:mkd0df72d236cca9ab7a62ebb6aa022c207aaa93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:48:00.426212    4804 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:48:00.426581    4804 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/kubeconfig: {Name:mk6821adb20f42e2e1842a7c6bcaf1ce77531dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:48:00.426779    4804 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:48:00.426828    4804 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 16:48:00.426861    4804 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-193000"
	I1001 16:48:00.426869    4804 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-193000"
	W1001 16:48:00.426873    4804 addons.go:243] addon storage-provisioner should already be in state true
	I1001 16:48:00.426884    4804 host.go:66] Checking if "running-upgrade-193000" exists ...
	I1001 16:48:00.426883    4804 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-193000"
	I1001 16:48:00.426898    4804 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-193000"
	I1001 16:48:00.427197    4804 config.go:182] Loaded profile config "running-upgrade-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 16:48:00.427929    4804 kapi.go:59] client config for running-upgrade-193000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/running-upgrade-193000/client.key", CAFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101b2e5d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 16:48:00.428048    4804 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-193000"
	W1001 16:48:00.428054    4804 addons.go:243] addon default-storageclass should already be in state true
	I1001 16:48:00.428060    4804 host.go:66] Checking if "running-upgrade-193000" exists ...
	I1001 16:48:00.431217    4804 out.go:177] * Verifying Kubernetes components...
	I1001 16:48:00.431653    4804 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 16:48:00.435452    4804 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 16:48:00.435460    4804 sshutil.go:53] new ssh client: &{IP:localhost Port:50233 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/running-upgrade-193000/id_rsa Username:docker}
	I1001 16:48:00.439246    4804 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:47:57.756639    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:00.442324    4804 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:48:00.449348    4804 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 16:48:00.449355    4804 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 16:48:00.449362    4804 sshutil.go:53] new ssh client: &{IP:localhost Port:50233 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/running-upgrade-193000/id_rsa Username:docker}
	I1001 16:48:00.538799    4804 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 16:48:00.543802    4804 api_server.go:52] waiting for apiserver process to appear ...
	I1001 16:48:00.543854    4804 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 16:48:00.548305    4804 api_server.go:72] duration metric: took 121.516542ms to wait for apiserver process to appear ...
	I1001 16:48:00.548312    4804 api_server.go:88] waiting for apiserver healthz status ...
	I1001 16:48:00.548320    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:00.555061    4804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 16:48:00.636940    4804 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 16:48:00.901539    4804 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 16:48:00.901555    4804 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 16:48:02.758826    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:02.759036    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:48:02.775953    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:48:02.776056    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:48:02.789032    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:48:02.789124    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:48:02.802925    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:48:02.803011    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:48:02.813266    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:48:02.813357    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:48:02.828211    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:48:02.828302    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:48:02.838396    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:48:02.838471    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:48:02.853635    4927 logs.go:282] 0 containers: []
	W1001 16:48:02.853648    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:48:02.853724    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:48:02.863949    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:48:02.863966    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:48:02.863971    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:48:02.868362    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:48:02.868369    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:48:02.883143    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:48:02.883153    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:48:02.894795    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:48:02.894806    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:48:02.929686    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:48:02.929696    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:48:02.943906    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:48:02.943917    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:48:02.960436    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:48:02.960445    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:48:02.973747    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:48:02.973761    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:48:02.986095    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:48:02.986108    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:48:03.024342    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:48:03.024352    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:48:03.043423    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:48:03.043433    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:48:03.054983    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:48:03.054995    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:48:03.091658    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:48:03.091668    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:48:03.105257    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:48:03.105268    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:48:03.117747    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:48:03.117759    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:48:03.129489    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:48:03.129500    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:48:03.140722    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:48:03.140733    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:48:05.667531    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:05.550355    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:05.550394    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:10.669771    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:10.669905    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:48:10.681615    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:48:10.681703    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:48:10.692189    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:48:10.692274    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:48:10.702934    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:48:10.703013    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:48:10.715499    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:48:10.715580    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:48:10.729393    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:48:10.729475    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:48:10.740008    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:48:10.740086    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:48:10.750422    4927 logs.go:282] 0 containers: []
	W1001 16:48:10.750436    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:48:10.750515    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:48:10.761381    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:48:10.761401    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:48:10.761407    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:48:10.777049    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:48:10.777060    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:48:10.789405    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:48:10.789417    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:48:10.827169    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:48:10.827181    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:48:10.865088    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:48:10.865105    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:48:10.877514    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:48:10.877524    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:48:10.894288    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:48:10.894303    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:48:10.912220    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:48:10.912231    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:48:10.923431    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:48:10.923442    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:48:10.946913    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:48:10.946927    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:48:10.984136    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:48:10.984148    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:48:10.998842    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:48:10.998852    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:48:11.016375    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:48:11.016390    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:48:11.029953    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:48:11.029964    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:48:11.041126    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:48:11.041137    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:48:11.045841    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:48:11.045848    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:48:11.057788    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:48:11.057804    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:48:10.550714    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:10.550758    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:13.572788    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:15.551171    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:15.551212    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:18.574951    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:18.575182    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:48:18.597322    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:48:18.597447    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:48:18.613364    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:48:18.613468    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:48:18.627060    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:48:18.627143    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:48:18.638969    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:48:18.639058    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:48:18.649381    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:48:18.649461    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:48:18.661214    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:48:18.661294    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:48:18.674061    4927 logs.go:282] 0 containers: []
	W1001 16:48:18.674073    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:48:18.674147    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:48:18.685860    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:48:18.685877    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:48:18.685882    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:48:18.725004    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:48:18.725012    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:48:18.739025    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:48:18.739035    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:48:18.756310    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:48:18.756323    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:48:18.769123    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:48:18.769136    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:48:18.779892    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:48:18.779905    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:48:18.791077    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:48:18.791088    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:48:18.808173    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:48:18.808187    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:48:18.832498    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:48:18.832506    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:48:18.837148    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:48:18.837156    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:48:18.876085    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:48:18.876100    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:48:18.893138    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:48:18.893155    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:48:18.904505    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:48:18.904518    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:48:18.916658    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:48:18.916669    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:48:18.958406    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:48:18.958422    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:48:18.973353    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:48:18.973364    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:48:18.989885    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:48:18.989894    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:48:21.503654    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:20.551692    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:20.551749    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:26.505933    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:26.506042    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:48:26.516919    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:48:26.517008    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:48:26.527639    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:48:26.527717    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:48:26.544066    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:48:26.544151    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:48:26.554974    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:48:26.555066    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:48:26.565765    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:48:26.565848    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:48:26.576030    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:48:26.576115    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:48:26.587442    4927 logs.go:282] 0 containers: []
	W1001 16:48:26.587458    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:48:26.587532    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:48:26.598536    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:48:26.598558    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:48:26.598563    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:48:26.612482    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:48:26.612496    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:48:26.650389    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:48:26.650400    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:48:26.671512    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:48:26.671521    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:48:26.685583    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:48:26.685593    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:48:26.708140    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:48:26.708148    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:48:26.719533    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:48:26.719546    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:48:26.732820    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:48:26.732834    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:48:26.743619    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:48:26.743632    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:48:26.780567    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:48:26.780577    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:48:26.798429    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:48:26.798439    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:48:26.812842    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:48:26.812854    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:48:26.825040    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:48:26.825051    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:48:26.841646    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:48:26.841659    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:48:26.853937    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:48:26.853948    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:48:26.865278    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:48:26.865290    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:48:25.552476    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:25.552516    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:30.553316    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:30.553350    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1001 16:48:30.903924    4804 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1001 16:48:30.907524    4804 out.go:177] * Enabled addons: storage-provisioner
	I1001 16:48:26.901241    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:48:26.901249    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:48:29.406105    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:30.919315    4804 addons.go:510] duration metric: took 30.49280275s for enable addons: enabled=[storage-provisioner]
	I1001 16:48:34.408721    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:34.408997    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:48:34.429020    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:48:34.429132    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:48:34.443127    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:48:34.443214    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:48:34.455486    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:48:34.455574    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:48:34.466391    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:48:34.466473    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:48:34.476776    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:48:34.476863    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:48:34.487091    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:48:34.487177    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:48:34.497338    4927 logs.go:282] 0 containers: []
	W1001 16:48:34.497352    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:48:34.497425    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:48:34.507735    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:48:34.507777    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:48:34.507783    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:48:34.522883    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:48:34.522898    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:48:34.540644    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:48:34.540654    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:48:34.552350    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:48:34.552359    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:48:34.564567    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:48:34.564579    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:48:34.568800    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:48:34.568805    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:48:34.580014    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:48:34.580025    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:48:34.616037    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:48:34.616045    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:48:34.631048    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:48:34.631063    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:48:34.642446    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:48:34.642456    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:48:34.659000    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:48:34.659010    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:48:34.670829    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:48:34.670843    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:48:34.694126    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:48:34.694136    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:48:34.730079    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:48:34.730092    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:48:34.744515    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:48:34.744527    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:48:34.781989    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:48:34.782003    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:48:34.793907    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:48:34.793918    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:48:35.554414    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:35.554452    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:37.308770    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:40.555771    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:40.555813    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:42.311319    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:42.311505    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:48:42.325454    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:48:42.325551    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:48:42.336099    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:48:42.336193    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:48:42.346278    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:48:42.346355    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:48:42.357231    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:48:42.357319    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:48:42.367722    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:48:42.367804    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:48:42.378352    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:48:42.378430    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:48:42.388928    4927 logs.go:282] 0 containers: []
	W1001 16:48:42.388938    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:48:42.389012    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:48:42.399036    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:48:42.399054    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:48:42.399060    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:48:42.410497    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:48:42.410509    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:48:42.450729    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:48:42.450740    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:48:42.462362    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:48:42.462374    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:48:42.474958    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:48:42.474972    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:48:42.499272    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:48:42.499285    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:48:42.517246    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:48:42.517257    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:48:42.533595    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:48:42.533606    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:48:42.548285    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:48:42.548297    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:48:42.561649    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:48:42.561660    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:48:42.576362    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:48:42.576375    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:48:42.587807    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:48:42.587820    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:48:42.605663    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:48:42.605672    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:48:42.612036    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:48:42.612046    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:48:42.652831    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:48:42.652842    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:48:42.690570    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:48:42.690582    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:48:42.703084    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:48:42.703098    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:48:45.220475    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:45.557774    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:45.557814    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:50.222920    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:50.223138    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:48:50.244084    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:48:50.244190    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:48:50.256562    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:48:50.256655    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:48:50.267244    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:48:50.267330    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:48:50.278656    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:48:50.278742    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:48:50.289913    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:48:50.290000    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:48:50.301244    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:48:50.301327    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:48:50.311444    4927 logs.go:282] 0 containers: []
	W1001 16:48:50.311455    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:48:50.311523    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:48:50.322350    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:48:50.322371    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:48:50.322376    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:48:50.359786    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:48:50.359798    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:48:50.378916    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:48:50.378931    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:48:50.392051    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:48:50.392067    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:48:50.405146    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:48:50.405157    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:48:50.417057    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:48:50.417069    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:48:50.433373    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:48:50.433387    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:48:50.445494    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:48:50.445512    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:48:50.450141    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:48:50.450147    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:48:50.491843    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:48:50.491858    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:48:50.506466    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:48:50.506483    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:48:50.527380    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:48:50.527391    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:48:50.540227    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:48:50.540240    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:48:50.564238    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:48:50.564245    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:48:50.603400    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:48:50.603408    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:48:50.621567    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:48:50.621585    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:48:50.638985    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:48:50.639000    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:48:50.560015    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:50.560032    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:53.150714    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:55.562177    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:55.562224    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:58.153062    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:58.153327    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:48:58.177168    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:48:58.177290    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:48:58.193409    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:48:58.193507    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:48:58.205474    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:48:58.205565    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:48:58.218708    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:48:58.218792    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:48:58.229498    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:48:58.229583    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:48:58.240517    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:48:58.240599    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:48:58.250288    4927 logs.go:282] 0 containers: []
	W1001 16:48:58.250297    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:48:58.250359    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:48:58.264882    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:48:58.264901    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:48:58.264906    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:48:58.277484    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:48:58.277495    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:48:58.294873    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:48:58.294884    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:48:58.306330    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:48:58.306341    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:48:58.329499    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:48:58.329509    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:48:58.333924    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:48:58.333933    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:48:58.348719    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:48:58.348729    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:48:58.360085    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:48:58.360097    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:48:58.375673    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:48:58.375685    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:48:58.387554    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:48:58.387566    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:48:58.422614    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:48:58.422626    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:48:58.461719    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:48:58.461729    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:48:58.481220    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:48:58.481235    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:48:58.492427    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:48:58.492438    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:48:58.506522    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:48:58.506533    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:48:58.518497    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:48:58.518510    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:48:58.557197    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:48:58.557204    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:49:01.073024    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:00.563164    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:00.563347    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:49:00.581874    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:49:00.581953    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:49:00.592169    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:49:00.592253    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:49:00.609812    4804 logs.go:282] 2 containers: [4e2b1026af64 52703530d033]
	I1001 16:49:00.609894    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:49:00.623840    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:49:00.623916    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:49:00.639064    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:49:00.639148    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:49:00.649648    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:49:00.649718    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:49:00.659541    4804 logs.go:282] 0 containers: []
	W1001 16:49:00.659552    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:49:00.659622    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:49:00.669815    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:49:00.669829    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:49:00.669834    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:49:00.706101    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:49:00.706116    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:49:00.719834    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:49:00.719847    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:49:00.731612    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:49:00.731625    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:49:00.742925    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:49:00.742937    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:49:00.764877    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:49:00.764886    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:49:00.775632    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:49:00.775643    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:49:00.787059    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:49:00.787069    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:49:00.804538    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:49:00.804630    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:49:00.820810    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:49:00.820816    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:49:00.825398    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:49:00.825408    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:49:00.839010    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:49:00.839019    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:49:00.850317    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:49:00.850328    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:49:00.868319    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:49:00.868328    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:49:00.893462    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:49:00.893475    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:49:00.893500    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:49:00.893505    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:49:00.893508    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:49:00.893511    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:49:00.893514    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:49:06.075475    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
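	The recurring pair in this log — "Checking apiserver healthz at https://10.0.2.15:8443/healthz" followed by "stopped: ... context deadline exceeded" — is minikube polling the apiserver's health endpoint and timing out, after which it falls back to gathering component logs. A rough manual equivalent of the same probe, assuming curl is available wherever the guest address 10.0.2.15 is reachable (curl, the -k flag, and the 5-second timeout are illustrative assumptions, not taken from the log):

	    # Probe the apiserver health endpoint the log is polling.
	    # -k: the apiserver serves a cluster-internal certificate, so skip verification.
	    # -m 5: give up after 5 seconds, mirroring the context deadline seen in the log.
	    curl -k -m 5 https://10.0.2.15:8443/healthz && echo " (healthy)" || echo "apiserver not responding"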
	I1001 16:49:06.075966    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:49:06.109803    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:49:06.109972    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:49:06.130509    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:49:06.130635    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:49:06.145551    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:49:06.145649    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:49:06.158778    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:49:06.158865    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:49:06.170415    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:49:06.170493    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:49:06.181059    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:49:06.181142    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:49:06.191653    4927 logs.go:282] 0 containers: []
	W1001 16:49:06.191666    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:49:06.191741    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:49:06.205605    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:49:06.205628    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:49:06.205634    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:49:06.210659    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:49:06.210666    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:49:06.244327    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:49:06.244345    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:49:06.271485    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:49:06.271501    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:49:06.288051    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:49:06.288067    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:49:06.325754    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:49:06.325762    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:49:06.337418    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:49:06.337431    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:49:06.375080    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:49:06.375092    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:49:06.388642    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:49:06.388658    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:49:06.402087    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:49:06.402100    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:49:06.426305    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:49:06.426318    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:49:06.443926    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:49:06.443937    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:49:06.456506    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:49:06.456518    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:49:06.474863    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:49:06.474875    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:49:06.512095    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:49:06.512107    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:49:06.526761    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:49:06.526775    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:49:06.542940    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:49:06.542953    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:49:09.057128    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:10.897189    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:14.059675    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:14.059921    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:49:14.080887    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:49:14.081000    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:49:14.095371    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:49:14.095456    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:49:14.108063    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:49:14.108142    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:49:14.118393    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:49:14.118464    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:49:14.128710    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:49:14.128792    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:49:14.139281    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:49:14.139357    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:49:14.149466    4927 logs.go:282] 0 containers: []
	W1001 16:49:14.149476    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:49:14.149542    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:49:14.159775    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:49:14.159793    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:49:14.159798    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:49:14.164072    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:49:14.164080    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:49:14.177801    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:49:14.177815    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:49:14.191956    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:49:14.191967    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:49:14.207828    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:49:14.207843    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:49:14.229549    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:49:14.229561    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:49:14.267925    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:49:14.267934    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:49:14.310413    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:49:14.310424    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:49:14.329270    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:49:14.329286    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:49:14.345209    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:49:14.345225    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:49:14.356707    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:49:14.356722    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:49:14.378127    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:49:14.378134    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:49:14.416521    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:49:14.416534    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:49:14.428310    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:49:14.428323    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:49:14.446596    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:49:14.446611    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:49:14.458660    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:49:14.458668    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:49:14.471724    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:49:14.471733    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:49:15.899926    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:15.900375    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:49:15.932052    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:49:15.932214    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:49:15.951142    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:49:15.951242    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:49:15.965370    4804 logs.go:282] 2 containers: [4e2b1026af64 52703530d033]
	I1001 16:49:15.965462    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:49:15.981719    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:49:15.981801    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:49:15.992821    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:49:15.992910    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:49:16.003676    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:49:16.003761    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:49:16.014404    4804 logs.go:282] 0 containers: []
	W1001 16:49:16.014416    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:49:16.014489    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:49:16.025040    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:49:16.025054    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:49:16.025060    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:49:16.041751    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:49:16.041843    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:49:16.058436    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:49:16.058441    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:49:16.062993    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:49:16.063002    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:49:16.104754    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:49:16.104765    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:49:16.116737    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:49:16.116747    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:49:16.137189    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:49:16.137199    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:49:16.152347    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:49:16.152356    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:49:16.166147    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:49:16.166155    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:49:16.177741    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:49:16.177752    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:49:16.188951    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:49:16.188961    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:49:16.206144    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:49:16.206154    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:49:16.217739    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:49:16.217755    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:49:16.241366    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:49:16.241376    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:49:16.256181    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:49:16.256190    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:49:16.256215    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:49:16.256220    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:49:16.256223    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:49:16.256227    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:49:16.256231    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
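	Each timeout above is followed by the same collection sweep: docker ps -a with a name=k8s_<component> filter to find that component's container IDs, docker logs --tail 400 on each ID, plus journalctl for the kubelet and Docker units and a dmesg tail. A minimal shell sketch of that sweep, assuming only the filters, units, and tail lengths shown in the log (container IDs differ per run):

	    #!/bin/bash
	    # Collect the same per-component logs that the entries above gather repeatedly.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet storage-provisioner; do
	      for id in $(docker ps -a --filter="name=k8s_${name}" --format='{{.ID}}'); do
	        echo "==> ${name} [${id}]"
	        docker logs --tail 400 "${id}"
	      done
	    done
	    sudo journalctl -u kubelet -n 400                                        # kubelet unit logs
	    sudo journalctl -u docker -u cri-docker -n 400                           # Docker / cri-docker logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings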
	I1001 16:49:16.984883    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:21.987242    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:21.987564    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:49:22.018285    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:49:22.018432    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:49:22.037504    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:49:22.037611    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:49:22.051566    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:49:22.051665    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:49:22.062762    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:49:22.062853    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:49:22.073222    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:49:22.073308    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:49:22.083993    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:49:22.084076    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:49:22.094486    4927 logs.go:282] 0 containers: []
	W1001 16:49:22.094505    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:49:22.094573    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:49:22.105791    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:49:22.105807    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:49:22.105813    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:49:22.117409    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:49:22.117424    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:49:22.141020    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:49:22.141031    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:49:22.152738    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:49:22.152750    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:49:22.186954    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:49:22.186969    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:49:22.201573    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:49:22.201586    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:49:22.213876    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:49:22.213887    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:49:22.231594    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:49:22.231613    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:49:22.246804    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:49:22.246818    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:49:22.258167    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:49:22.258184    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:49:22.269498    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:49:22.269512    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:49:22.282528    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:49:22.282542    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:49:22.320373    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:49:22.320387    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:49:22.358582    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:49:22.358594    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:49:22.373432    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:49:22.373448    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:49:22.391132    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:49:22.391141    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:49:22.395385    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:49:22.395393    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:49:24.912109    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:26.260270    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:29.914503    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:29.914986    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:49:29.950324    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:49:29.950478    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:49:29.969666    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:49:29.969788    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:49:29.983541    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:49:29.983630    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:49:29.995336    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:49:29.995432    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:49:30.006030    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:49:30.006122    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:49:30.017329    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:49:30.017415    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:49:30.027354    4927 logs.go:282] 0 containers: []
	W1001 16:49:30.027368    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:49:30.027436    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:49:30.038263    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:49:30.038282    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:49:30.038288    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:49:30.075283    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:49:30.075297    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:49:30.093162    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:49:30.093173    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:49:30.107180    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:49:30.107192    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:49:30.119286    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:49:30.119297    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:49:30.132333    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:49:30.132347    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:49:30.154200    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:49:30.154210    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:49:30.165964    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:49:30.165974    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:49:30.170712    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:49:30.170720    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:49:30.209632    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:49:30.209647    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:49:30.223102    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:49:30.223116    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:49:30.241809    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:49:30.241822    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:49:30.258217    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:49:30.258228    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:49:30.269631    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:49:30.269644    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:49:30.304503    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:49:30.304518    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:49:30.319770    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:49:30.319787    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:49:30.331240    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:49:30.331252    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:49:31.262593    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:31.262835    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:49:31.297698    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:49:31.297843    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:49:31.314223    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:49:31.314317    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:49:31.327049    4804 logs.go:282] 2 containers: [4e2b1026af64 52703530d033]
	I1001 16:49:31.327139    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:49:31.339423    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:49:31.339503    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:49:31.350592    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:49:31.350668    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:49:31.361444    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:49:31.361531    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:49:31.372168    4804 logs.go:282] 0 containers: []
	W1001 16:49:31.372179    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:49:31.372247    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:49:31.382439    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:49:31.382453    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:49:31.382459    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:49:31.387431    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:49:31.387438    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:49:31.402494    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:49:31.402505    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:49:31.416714    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:49:31.416725    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:49:31.436011    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:49:31.436025    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:49:31.448595    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:49:31.448606    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:49:31.473190    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:49:31.473198    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:49:31.484415    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:49:31.484426    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:49:31.502825    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:49:31.502921    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:49:31.519547    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:49:31.519553    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:49:31.591812    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:49:31.591827    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:49:31.607064    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:49:31.607074    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:49:31.621233    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:49:31.621248    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:49:31.632520    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:49:31.632535    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:49:31.644066    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:49:31.644080    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:49:31.644106    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:49:31.644111    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:49:31.644116    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:49:31.644120    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:49:31.644123    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:49:32.845153    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:37.847502    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:37.847783    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:49:37.867465    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:49:37.867586    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:49:37.881710    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:49:37.881810    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:49:37.895338    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:49:37.895427    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:49:37.906449    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:49:37.906536    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:49:37.922838    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:49:37.922919    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:49:37.933581    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:49:37.933672    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:49:37.944096    4927 logs.go:282] 0 containers: []
	W1001 16:49:37.944109    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:49:37.944187    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:49:37.956003    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:49:37.956022    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:49:37.956028    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:49:37.973381    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:49:37.973394    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:49:37.985722    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:49:37.985733    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:49:38.020854    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:49:38.020866    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:49:38.034777    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:49:38.034788    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:49:38.073434    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:49:38.073445    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:49:38.092863    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:49:38.092874    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:49:38.105142    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:49:38.105154    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:49:38.142001    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:49:38.142012    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:49:38.153970    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:49:38.153981    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:49:38.170225    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:49:38.170239    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:49:38.183076    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:49:38.183086    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:49:38.197151    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:49:38.197161    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:49:38.220562    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:49:38.220572    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:49:38.224567    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:49:38.224576    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:49:38.238518    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:49:38.238534    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:49:38.253360    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:49:38.253373    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:49:40.767082    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:41.648186    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:45.769347    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:45.769499    4927 kubeadm.go:597] duration metric: took 4m4.435217583s to restartPrimaryControlPlane
	W1001 16:49:45.769652    4927 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1001 16:49:45.769714    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1001 16:49:46.863310    4927 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.093588541s)
	I1001 16:49:46.863373    4927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 16:49:46.650468    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:46.650580    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:49:46.662018    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:49:46.662098    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:49:46.674159    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:49:46.674243    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:49:46.689542    4804 logs.go:282] 2 containers: [4e2b1026af64 52703530d033]
	I1001 16:49:46.689630    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:49:46.701563    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:49:46.701646    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:49:46.713193    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:49:46.713280    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:49:46.724719    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:49:46.724802    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:49:46.735750    4804 logs.go:282] 0 containers: []
	W1001 16:49:46.735765    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:49:46.735846    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:49:46.747964    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:49:46.747985    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:49:46.747992    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:49:46.764535    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:49:46.764553    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:49:46.776948    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:49:46.776960    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:49:46.789159    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:49:46.789170    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:49:46.815217    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:49:46.815232    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:49:46.827759    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:49:46.827772    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:49:46.847039    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:49:46.847135    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:49:46.864249    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:49:46.864257    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:49:46.904147    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:49:46.904159    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:49:46.919851    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:49:46.919866    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:49:46.936176    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:49:46.936191    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:49:46.948706    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:49:46.948718    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:49:46.966831    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:49:46.966848    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:49:46.979342    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:49:46.979354    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:49:46.985082    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:49:46.985093    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:49:46.985119    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:49:46.985125    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:49:46.985129    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:49:46.985133    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:49:46.985135    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:49:46.868576    4927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 16:49:46.871868    4927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 16:49:46.875042    4927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 16:49:46.875049    4927 kubeadm.go:157] found existing configuration files:
	
	I1001 16:49:46.875089    4927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/admin.conf
	I1001 16:49:46.877896    4927 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 16:49:46.877951    4927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 16:49:46.881211    4927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/kubelet.conf
	I1001 16:49:46.884096    4927 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 16:49:46.884131    4927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 16:49:46.886921    4927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/controller-manager.conf
	I1001 16:49:46.889921    4927 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 16:49:46.889969    4927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 16:49:46.893215    4927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/scheduler.conf
	I1001 16:49:46.896205    4927 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 16:49:46.896251    4927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
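	Before re-running kubeadm init, the lines above perform minikube's stale-kubeconfig check: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails (here every grep exits with status 2 because the files no longer exist after the kubeadm reset). A minimal shell sketch of the same cleanup, using only the endpoint and file names recorded in the log:

	    #!/bin/bash
	    # Remove kubeconfig files that do not reference the expected control-plane
	    # endpoint, so the following 'kubeadm init' can regenerate them.
	    endpoint="https://control-plane.minikube.internal:50522"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q "${endpoint}" "/etc/kubernetes/${f}"; then
	        sudo rm -f "/etc/kubernetes/${f}"
	      fi
	    done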
	I1001 16:49:46.899107    4927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 16:49:46.980874    4927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 16:49:53.516864    4927 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1001 16:49:53.516893    4927 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 16:49:53.516928    4927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 16:49:53.516973    4927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 16:49:53.517023    4927 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 16:49:53.517055    4927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 16:49:53.520148    4927 out.go:235]   - Generating certificates and keys ...
	I1001 16:49:53.520188    4927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 16:49:53.520223    4927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 16:49:53.520268    4927 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 16:49:53.520305    4927 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 16:49:53.520354    4927 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 16:49:53.520381    4927 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 16:49:53.520413    4927 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 16:49:53.520454    4927 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 16:49:53.520499    4927 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 16:49:53.520538    4927 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 16:49:53.520561    4927 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 16:49:53.520591    4927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 16:49:53.520624    4927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 16:49:53.520656    4927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 16:49:53.520686    4927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 16:49:53.520711    4927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 16:49:53.520759    4927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 16:49:53.520799    4927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 16:49:53.520823    4927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 16:49:53.520863    4927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 16:49:53.531184    4927 out.go:235]   - Booting up control plane ...
	I1001 16:49:53.531229    4927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 16:49:53.531266    4927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 16:49:53.531317    4927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 16:49:53.531365    4927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 16:49:53.531450    4927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 16:49:53.531501    4927 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503811 seconds
	I1001 16:49:53.531564    4927 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 16:49:53.531627    4927 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 16:49:53.531658    4927 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 16:49:53.531755    4927 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-342000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 16:49:53.531786    4927 kubeadm.go:310] [bootstrap-token] Using token: f5f5sl.u9431kvc7hveohtv
	I1001 16:49:53.535254    4927 out.go:235]   - Configuring RBAC rules ...
	I1001 16:49:53.535306    4927 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 16:49:53.535352    4927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 16:49:53.535427    4927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 16:49:53.535500    4927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 16:49:53.535563    4927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 16:49:53.535608    4927 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 16:49:53.535671    4927 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 16:49:53.535701    4927 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 16:49:53.535734    4927 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 16:49:53.535738    4927 kubeadm.go:310] 
	I1001 16:49:53.535770    4927 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 16:49:53.535775    4927 kubeadm.go:310] 
	I1001 16:49:53.535814    4927 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 16:49:53.535817    4927 kubeadm.go:310] 
	I1001 16:49:53.535831    4927 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 16:49:53.535867    4927 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 16:49:53.535895    4927 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 16:49:53.535899    4927 kubeadm.go:310] 
	I1001 16:49:53.535926    4927 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 16:49:53.535929    4927 kubeadm.go:310] 
	I1001 16:49:53.535963    4927 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 16:49:53.535967    4927 kubeadm.go:310] 
	I1001 16:49:53.535993    4927 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 16:49:53.536038    4927 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 16:49:53.536080    4927 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 16:49:53.536085    4927 kubeadm.go:310] 
	I1001 16:49:53.536131    4927 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 16:49:53.536171    4927 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 16:49:53.536174    4927 kubeadm.go:310] 
	I1001 16:49:53.536216    4927 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f5f5sl.u9431kvc7hveohtv \
	I1001 16:49:53.536274    4927 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7410ba584d1420d22d17a85d1568f395de246b7fddabe3e224321915d0b92005 \
	I1001 16:49:53.536287    4927 kubeadm.go:310] 	--control-plane 
	I1001 16:49:53.536291    4927 kubeadm.go:310] 
	I1001 16:49:53.536342    4927 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 16:49:53.536346    4927 kubeadm.go:310] 
	I1001 16:49:53.536390    4927 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f5f5sl.u9431kvc7hveohtv \
	I1001 16:49:53.536444    4927 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7410ba584d1420d22d17a85d1568f395de246b7fddabe3e224321915d0b92005 
	I1001 16:49:53.536450    4927 cni.go:84] Creating CNI manager for ""
	I1001 16:49:53.536458    4927 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:49:53.546139    4927 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 16:49:53.550217    4927 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 16:49:53.553359    4927 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 16:49:53.558084    4927 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 16:49:53.558131    4927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 16:49:53.558153    4927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-342000 minikube.k8s.io/updated_at=2024_10_01T16_49_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=stopped-upgrade-342000 minikube.k8s.io/primary=true
	I1001 16:49:53.602884    4927 ops.go:34] apiserver oom_adj: -16
	I1001 16:49:53.602881    4927 kubeadm.go:1113] duration metric: took 44.785833ms to wait for elevateKubeSystemPrivileges
	I1001 16:49:53.602902    4927 kubeadm.go:394] duration metric: took 4m12.283040709s to StartCluster
	I1001 16:49:53.602913    4927 settings.go:142] acquiring lock: {Name:mkd0df72d236cca9ab7a62ebb6aa022c207aaa93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:49:53.603005    4927 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:49:53.603434    4927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/kubeconfig: {Name:mk6821adb20f42e2e1842a7c6bcaf1ce77531dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:49:53.603642    4927 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:49:53.603734    4927 config.go:182] Loaded profile config "stopped-upgrade-342000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 16:49:53.603682    4927 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 16:49:53.603771    4927 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-342000"
	I1001 16:49:53.603780    4927 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-342000"
	W1001 16:49:53.603785    4927 addons.go:243] addon storage-provisioner should already be in state true
	I1001 16:49:53.603781    4927 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-342000"
	I1001 16:49:53.603798    4927 host.go:66] Checking if "stopped-upgrade-342000" exists ...
	I1001 16:49:53.603802    4927 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-342000"
	I1001 16:49:53.604734    4927 kapi.go:59] client config for stopped-upgrade-342000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/client.key", CAFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10453e5d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 16:49:53.604855    4927 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-342000"
	W1001 16:49:53.604859    4927 addons.go:243] addon default-storageclass should already be in state true
	I1001 16:49:53.604866    4927 host.go:66] Checking if "stopped-upgrade-342000" exists ...
	I1001 16:49:53.607156    4927 out.go:177] * Verifying Kubernetes components...
	I1001 16:49:53.607495    4927 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 16:49:53.611398    4927 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 16:49:53.611406    4927 sshutil.go:53] new ssh client: &{IP:localhost Port:50486 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/id_rsa Username:docker}
	I1001 16:49:53.615140    4927 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:49:53.618237    4927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:49:53.622244    4927 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 16:49:53.622250    4927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 16:49:53.622255    4927 sshutil.go:53] new ssh client: &{IP:localhost Port:50486 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/id_rsa Username:docker}
	I1001 16:49:53.708369    4927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 16:49:53.713457    4927 api_server.go:52] waiting for apiserver process to appear ...
	I1001 16:49:53.713510    4927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 16:49:53.716965    4927 api_server.go:72] duration metric: took 113.311208ms to wait for apiserver process to appear ...
	I1001 16:49:53.716973    4927 api_server.go:88] waiting for apiserver healthz status ...
	I1001 16:49:53.716980    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:53.729975    4927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 16:49:53.797396    4927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 16:49:54.081190    4927 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 16:49:54.081203    4927 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 16:49:56.989163    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:58.719009    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:58.719038    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:01.991423    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:01.991541    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:50:02.002981    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:50:02.003067    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:50:02.013114    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:50:02.013196    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:50:02.023560    4804 logs.go:282] 2 containers: [4e2b1026af64 52703530d033]
	I1001 16:50:02.023640    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:50:02.033826    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:50:02.033908    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:50:02.049816    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:50:02.049905    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:50:02.060113    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:50:02.060195    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:50:02.070834    4804 logs.go:282] 0 containers: []
	W1001 16:50:02.070845    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:50:02.070921    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:50:02.082622    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:50:02.082636    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:50:02.082642    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:50:02.096777    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:50:02.096787    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:50:02.113383    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:50:02.113394    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:50:02.125335    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:50:02.125346    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:50:02.160457    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:50:02.160467    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:50:02.165065    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:50:02.165074    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:50:02.179531    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:50:02.179542    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:50:02.195154    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:50:02.195163    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:50:02.207129    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:50:02.207138    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:50:02.225160    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:50:02.225172    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:50:02.237203    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:50:02.237213    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:50:02.260461    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:50:02.260470    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:50:02.277697    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:50:02.277792    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:50:02.294428    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:50:02.294435    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:50:02.306921    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:50:02.306931    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:50:02.306958    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:50:02.306963    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:50:02.306966    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:50:02.306969    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:50:02.306972    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:50:03.719207    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:03.719229    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:08.719474    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:08.719502    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:12.311007    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:13.720044    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:13.720067    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:17.313299    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:17.313811    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:50:17.351818    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:50:17.351986    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:50:17.372932    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:50:17.373058    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:50:17.388587    4804 logs.go:282] 4 containers: [2a9fdf492bbf 50b4f2e786a4 4e2b1026af64 52703530d033]
	I1001 16:50:17.388686    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:50:17.400357    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:50:17.400436    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:50:17.411276    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:50:17.411350    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:50:17.426127    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:50:17.426225    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:50:17.436586    4804 logs.go:282] 0 containers: []
	W1001 16:50:17.436598    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:50:17.436673    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:50:17.447829    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:50:17.447846    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:50:17.447852    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:50:17.463361    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:50:17.463375    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:50:17.475895    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:50:17.475905    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:50:17.487639    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:50:17.487651    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:50:17.504124    4804 logs.go:123] Gathering logs for coredns [50b4f2e786a4] ...
	I1001 16:50:17.504146    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50b4f2e786a4"
	I1001 16:50:17.517438    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:50:17.517450    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:50:17.530595    4804 logs.go:123] Gathering logs for coredns [2a9fdf492bbf] ...
	I1001 16:50:17.530606    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a9fdf492bbf"
	I1001 16:50:17.541964    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:50:17.541976    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:50:17.547220    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:50:17.547228    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:50:17.583152    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:50:17.583163    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:50:17.598286    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:50:17.598297    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:50:17.615102    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:50:17.615198    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:50:17.631900    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:50:17.631906    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:50:17.649295    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:50:17.649304    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:50:17.660792    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:50:17.660805    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:50:17.672697    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:50:17.672711    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:50:17.697425    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:50:17.697432    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:50:17.697456    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:50:17.697460    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:50:17.697463    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:50:17.697481    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:50:17.697484    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:50:18.720557    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:18.720614    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:23.721288    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:23.721340    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1001 16:50:24.082733    4927 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1001 16:50:24.087046    4927 out.go:177] * Enabled addons: storage-provisioner
	I1001 16:50:24.094944    4927 addons.go:510] duration metric: took 30.491580125s for enable addons: enabled=[storage-provisioner]
	I1001 16:50:27.701480    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:28.722207    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:28.722228    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:32.703812    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:32.704338    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:50:32.738950    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:50:32.739112    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:50:32.764008    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:50:32.764115    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:50:32.777895    4804 logs.go:282] 4 containers: [2a9fdf492bbf 50b4f2e786a4 4e2b1026af64 52703530d033]
	I1001 16:50:32.777996    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:50:32.790414    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:50:32.790502    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:50:32.801841    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:50:32.801925    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:50:32.813046    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:50:32.813132    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:50:32.823248    4804 logs.go:282] 0 containers: []
	W1001 16:50:32.823266    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:50:32.823335    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:50:32.834156    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:50:32.834175    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:50:32.834180    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:50:32.853166    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:50:32.853257    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:50:33.723275    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:33.723316    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:32.869420    4804 logs.go:123] Gathering logs for coredns [2a9fdf492bbf] ...
	I1001 16:50:32.869429    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a9fdf492bbf"
	I1001 16:50:32.880840    4804 logs.go:123] Gathering logs for coredns [50b4f2e786a4] ...
	I1001 16:50:32.880849    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50b4f2e786a4"
	I1001 16:50:32.892618    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:50:32.892628    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:50:32.904211    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:50:32.904220    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:50:32.918582    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:50:32.918591    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:50:32.930277    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:50:32.930287    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:50:32.947687    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:50:32.947697    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:50:32.959756    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:50:32.959769    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:50:32.974376    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:50:32.974386    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:50:32.985849    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:50:32.985860    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:50:33.001160    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:50:33.001169    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:50:33.005824    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:50:33.005831    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:50:33.042267    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:50:33.042281    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:50:33.068362    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:50:33.068374    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:50:33.093781    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:50:33.093790    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:50:33.093814    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:50:33.093818    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:50:33.093831    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:50:33.093835    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:50:33.093840    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:50:38.724738    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:38.724786    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:43.726569    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:43.726609    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:43.097320    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:48.726878    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:48.726902    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:48.099629    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:48.099860    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:50:48.114717    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:50:48.114817    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:50:48.126197    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:50:48.126285    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:50:48.137811    4804 logs.go:282] 4 containers: [2a9fdf492bbf 50b4f2e786a4 4e2b1026af64 52703530d033]
	I1001 16:50:48.137899    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:50:48.153284    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:50:48.153367    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:50:48.164088    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:50:48.164168    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:50:48.174621    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:50:48.174692    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:50:48.185704    4804 logs.go:282] 0 containers: []
	W1001 16:50:48.185717    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:50:48.185787    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:50:48.196180    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:50:48.196195    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:50:48.196200    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:50:48.207629    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:50:48.207639    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:50:48.219540    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:50:48.219550    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:50:48.232462    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:50:48.232474    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:50:48.250059    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:50:48.250069    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:50:48.264975    4804 logs.go:123] Gathering logs for coredns [2a9fdf492bbf] ...
	I1001 16:50:48.264986    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a9fdf492bbf"
	I1001 16:50:48.295856    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:50:48.295865    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:50:48.322950    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:50:48.322961    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:50:48.327636    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:50:48.327645    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:50:48.342392    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:50:48.342402    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:50:48.356328    4804 logs.go:123] Gathering logs for coredns [50b4f2e786a4] ...
	I1001 16:50:48.356338    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50b4f2e786a4"
	I1001 16:50:48.367665    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:50:48.367675    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:50:48.380558    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:50:48.380571    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:50:48.391870    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:50:48.391881    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:50:48.408668    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:50:48.408759    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:50:48.425658    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:50:48.425673    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:50:48.461883    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:50:48.461893    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:50:48.461923    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:50:48.461928    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:50:48.461934    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:50:48.461938    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:50:48.461941    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:50:53.729058    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:53.729228    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:50:53.740826    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:50:53.740902    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:50:53.755917    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:50:53.755990    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:50:53.766618    4927 logs.go:282] 2 containers: [f7caca5d7952 406124d13b16]
	I1001 16:50:53.766705    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:50:53.777229    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:50:53.777311    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:50:53.787952    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:50:53.788035    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:50:53.798129    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:50:53.798212    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:50:53.807839    4927 logs.go:282] 0 containers: []
	W1001 16:50:53.807853    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:50:53.807923    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:50:53.818203    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:50:53.818221    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:50:53.818227    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:50:53.822934    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:50:53.822941    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:50:53.856624    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:50:53.856640    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:50:53.876357    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:50:53.876372    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:50:53.900582    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:50:53.900591    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:50:53.935276    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:50:53.935286    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:50:53.950041    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:50:53.950052    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:50:53.964180    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:50:53.964190    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:50:53.975900    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:50:53.975916    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:50:53.987265    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:50:53.987276    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:50:54.001727    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:50:54.001736    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:50:54.013597    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:50:54.013609    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:50:54.030596    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:50:54.030606    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:50:56.545188    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:01.547483    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:01.547655    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:01.558992    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:51:01.559075    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:01.569626    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:51:01.569710    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:01.580500    4927 logs.go:282] 2 containers: [f7caca5d7952 406124d13b16]
	I1001 16:51:01.580585    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:01.590403    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:51:01.590484    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:01.602135    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:51:01.602232    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:01.612346    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:51:01.612427    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:01.622087    4927 logs.go:282] 0 containers: []
	W1001 16:51:01.622098    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:01.622166    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:01.633393    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:51:01.633408    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:01.633414    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:01.659343    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:01.659354    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:51:01.694652    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:01.694671    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:01.734330    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:51:01.734346    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:51:01.749887    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:51:01.749903    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:51:01.762025    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:51:01.762040    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:51:01.776749    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:51:01.776761    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:51:01.788543    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:51:01.788555    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:01.801105    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:01.801121    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:01.805743    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:51:01.805750    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:51:01.819548    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:51:01.819563    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:51:01.831018    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:51:01.831029    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:51:01.842800    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:51:01.842810    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:50:58.465967    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:04.361753    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:03.468181    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:03.468310    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:03.483154    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:51:03.483246    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:03.494880    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:51:03.494967    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:03.505505    4804 logs.go:282] 4 containers: [2a9fdf492bbf 50b4f2e786a4 4e2b1026af64 52703530d033]
	I1001 16:51:03.505592    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:03.515929    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:51:03.516012    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:03.527668    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:51:03.527739    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:03.538705    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:51:03.538779    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:03.549311    4804 logs.go:282] 0 containers: []
	W1001 16:51:03.549323    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:03.549388    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:03.559793    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:51:03.559809    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:51:03.559814    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:51:03.574308    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:51:03.574319    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:51:03.585814    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:51:03.585824    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:51:03.606365    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:51:03.606374    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:03.618468    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:03.618479    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:51:03.637633    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:51:03.637725    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:51:03.654071    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:51:03.654079    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:51:03.665651    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:51:03.665661    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:51:03.683093    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:51:03.683104    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:51:03.694395    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:03.694413    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:03.729771    4804 logs.go:123] Gathering logs for coredns [2a9fdf492bbf] ...
	I1001 16:51:03.729782    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a9fdf492bbf"
	I1001 16:51:03.745381    4804 logs.go:123] Gathering logs for coredns [50b4f2e786a4] ...
	I1001 16:51:03.745391    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50b4f2e786a4"
	I1001 16:51:03.757214    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:03.757229    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:03.762072    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:51:03.762079    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:51:03.776549    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:51:03.776559    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:51:03.788620    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:03.788633    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:03.813295    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:51:03.813303    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:51:03.813332    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:51:03.813337    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:51:03.813355    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:51:03.813362    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:51:03.813366    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
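Each "Checking apiserver healthz ... / stopped: ... context deadline exceeded" pair in this log is the same probe timing out: an HTTPS GET of the apiserver's /healthz endpoint on 10.0.2.15:8443 that never answers before the client deadline. A rough standalone equivalent, assuming a 5-second budget and -k for the node's self-signed certificate (neither detail is stated in the log), would be:

    curl -k --max-time 5 https://10.0.2.15:8443/healthz
    # healthy apiserver: prints the plain-text body "ok"
    # here: curl exits 28 (operation timed out), the same condition reported above as
    # "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
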
	I1001 16:51:09.364093    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:09.364498    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:09.399724    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:51:09.399875    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:09.417856    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:51:09.417972    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:09.438587    4927 logs.go:282] 2 containers: [f7caca5d7952 406124d13b16]
	I1001 16:51:09.438687    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:09.449749    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:51:09.449831    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:09.460795    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:51:09.460878    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:09.471470    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:51:09.471541    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:09.481364    4927 logs.go:282] 0 containers: []
	W1001 16:51:09.481382    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:09.481438    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:09.491992    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:51:09.492007    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:51:09.492013    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:51:09.510610    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:51:09.510621    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:51:09.522538    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:51:09.522551    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:51:09.537189    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:51:09.537201    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:51:09.549203    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:51:09.549215    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:09.561971    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:09.561982    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:51:09.598259    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:09.598269    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:09.602822    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:51:09.602832    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:51:09.617266    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:51:09.617281    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:51:09.629191    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:51:09.629209    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:51:09.646167    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:09.646182    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:09.669788    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:09.669797    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:09.703982    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:51:09.703994    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:51:12.218712    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:13.817220    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:17.219450    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:17.219637    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:17.231567    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:51:17.231650    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:17.242136    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:51:17.242220    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:17.253280    4927 logs.go:282] 2 containers: [f7caca5d7952 406124d13b16]
	I1001 16:51:17.253359    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:17.263490    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:51:17.263570    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:17.273903    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:51:17.273990    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:17.284716    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:51:17.284800    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:17.294598    4927 logs.go:282] 0 containers: []
	W1001 16:51:17.294611    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:17.294683    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:17.304958    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:51:17.304976    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:17.304981    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:17.309626    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:51:17.309632    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:51:17.323259    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:51:17.323270    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:51:17.335122    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:51:17.335133    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:51:17.347475    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:17.347485    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:17.371090    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:17.371103    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:51:17.405848    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:17.405858    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:17.442021    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:51:17.442037    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:51:17.460231    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:51:17.460243    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:51:17.472138    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:51:17.472152    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:51:17.487219    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:51:17.487229    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:51:17.504181    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:51:17.504194    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:51:17.517088    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:51:17.517104    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
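The log gathering that repeats between probes follows one pattern per component: resolve the container ID behind each k8s_<name> filter, then tail its last 400 lines. A condensed sketch of that loop, run directly on the node instead of through ssh_runner (the component names and the 400-line tail come from the log; the rest is an assumption):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      [ -z "$ids" ] && { echo "No container was found matching \"${c}\""; continue; }
      for id in $ids; do
        echo "== ${c} [${id}] =="
        docker logs --tail 400 "$id"
      done
    done
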
	I1001 16:51:20.030781    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:18.819446    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:18.819616    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:18.834491    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:51:18.834590    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:18.847948    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:51:18.848035    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:18.858854    4804 logs.go:282] 4 containers: [2a9fdf492bbf 50b4f2e786a4 4e2b1026af64 52703530d033]
	I1001 16:51:18.858945    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:18.870061    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:51:18.870141    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:18.881100    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:51:18.881184    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:18.891788    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:51:18.891872    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:18.902456    4804 logs.go:282] 0 containers: []
	W1001 16:51:18.902469    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:18.902535    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:18.913334    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:51:18.913350    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:51:18.913356    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:51:18.928814    4804 logs.go:123] Gathering logs for coredns [2a9fdf492bbf] ...
	I1001 16:51:18.928826    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a9fdf492bbf"
	I1001 16:51:18.940940    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:51:18.940953    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:51:18.953456    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:51:18.953470    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:51:18.975234    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:18.975245    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:51:18.994350    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:51:18.994441    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:51:19.010328    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:51:19.010333    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:51:19.024024    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:51:19.024033    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:51:19.036557    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:51:19.036569    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:51:19.048367    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:51:19.048377    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:51:19.059959    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:19.059971    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:19.095541    4804 logs.go:123] Gathering logs for coredns [50b4f2e786a4] ...
	I1001 16:51:19.095554    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50b4f2e786a4"
	I1001 16:51:19.107223    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:19.107233    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:19.132064    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:51:19.132075    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:19.143977    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:19.143987    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:19.148814    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:51:19.148822    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:51:19.163387    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:51:19.163400    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:51:19.163424    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:51:19.163428    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:51:19.163431    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:51:19.163435    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:51:19.163438    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
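The two "Found kubelet problem" warnings above are the kubelet's reflector being denied access to the kube-system/coredns ConfigMap: the node identity system:node:running-upgrade-193000 is rejected because the authorizer finds no relationship between that node and the object. minikube's own matching logic is not shown in this log; a crude approximation that surfaces the same journal lines it flags is:

    sudo journalctl -u kubelet -n 400 --no-pager \
      | grep -E 'reflector\.go.*(failed to list|Failed to watch)'
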
	I1001 16:51:25.032950    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:25.033078    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:25.046417    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:51:25.046509    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:25.060023    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:51:25.060110    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:25.070328    4927 logs.go:282] 2 containers: [f7caca5d7952 406124d13b16]
	I1001 16:51:25.070401    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:25.081516    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:51:25.081603    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:25.092525    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:51:25.092611    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:25.103189    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:51:25.103277    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:25.113036    4927 logs.go:282] 0 containers: []
	W1001 16:51:25.113047    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:25.113120    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:25.123158    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:51:25.123173    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:51:25.123179    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:51:25.137690    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:25.137701    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:25.142510    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:25.142516    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:25.175821    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:51:25.175837    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:51:25.192184    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:51:25.192198    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:51:25.206110    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:51:25.206121    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:51:25.220438    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:51:25.220448    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:51:25.235469    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:51:25.235480    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:51:25.252958    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:25.252968    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:25.276097    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:25.276106    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:51:25.308974    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:51:25.308981    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:51:25.321019    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:51:25.321035    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:51:25.333403    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:51:25.333418    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:27.847339    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:29.167449    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:32.849521    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:32.849773    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:32.867637    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:51:32.867732    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:32.880950    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:51:32.881042    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:32.891924    4927 logs.go:282] 2 containers: [f7caca5d7952 406124d13b16]
	I1001 16:51:32.892003    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:32.907793    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:51:32.907879    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:32.917867    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:51:32.917952    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:32.928667    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:51:32.928745    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:32.939179    4927 logs.go:282] 0 containers: []
	W1001 16:51:32.939192    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:32.939266    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:32.950251    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:51:32.950267    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:32.950272    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:32.975592    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:51:32.975601    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:32.986560    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:32.986570    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:51:33.021144    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:51:33.021152    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:51:33.032381    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:51:33.032397    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:51:33.049041    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:51:33.049052    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:51:33.062905    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:51:33.062915    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:51:33.074266    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:51:33.074281    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:51:33.088865    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:51:33.088878    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:51:33.100159    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:51:33.100172    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:51:33.117839    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:33.117851    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:33.122455    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:33.122464    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:33.157432    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:51:33.157447    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:51:35.671401    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:34.169737    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:34.169984    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:34.189172    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:51:34.189286    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:34.203313    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:51:34.203404    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:34.215560    4804 logs.go:282] 4 containers: [2a9fdf492bbf 50b4f2e786a4 4e2b1026af64 52703530d033]
	I1001 16:51:34.215647    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:34.228501    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:51:34.228590    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:34.239534    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:51:34.239616    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:34.250303    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:51:34.250383    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:34.260260    4804 logs.go:282] 0 containers: []
	W1001 16:51:34.260273    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:34.260337    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:34.272333    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:51:34.272353    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:51:34.272358    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:51:34.286632    4804 logs.go:123] Gathering logs for coredns [2a9fdf492bbf] ...
	I1001 16:51:34.286642    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a9fdf492bbf"
	I1001 16:51:34.298086    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:51:34.298098    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:51:34.309384    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:51:34.309397    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:51:34.327055    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:51:34.327069    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:34.339242    4804 logs.go:123] Gathering logs for coredns [50b4f2e786a4] ...
	I1001 16:51:34.339255    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50b4f2e786a4"
	I1001 16:51:34.351018    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:34.351031    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:34.355745    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:34.355754    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:34.391357    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:51:34.391368    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:51:34.408852    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:51:34.408866    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:51:34.423374    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:51:34.423385    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:51:34.437625    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:34.437634    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:34.462455    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:34.462461    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:51:34.480026    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:51:34.480116    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:51:34.496190    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:51:34.496194    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:51:34.510668    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:51:34.510680    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:51:34.525276    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:51:34.525286    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:51:34.525314    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:51:34.525319    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:51:34.525332    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:51:34.525337    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:51:34.525346    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
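These probe/gather cycles repeat until the node-start budget is exhausted; the run ends further down with "wait 6m0s for node: ... apiserver healthz never reported healthy". A hypothetical shell rendering of that outer wait, with the 6m0s budget taken from that final error and the retry interval assumed:

    deadline=$((SECONDS + 360))   # 6m0s node-start budget, per the GUEST_START error below
    until [ "$(curl -ks --max-time 5 https://10.0.2.15:8443/healthz)" = "ok" ]; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "apiserver healthz never reported healthy: context deadline exceeded" >&2
        exit 1
      fi
      sleep 5                     # assumed retry interval; not taken from the log
    done
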
	I1001 16:51:40.673577    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:40.673701    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:40.688512    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:51:40.688598    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:40.699115    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:51:40.699210    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:40.709930    4927 logs.go:282] 2 containers: [f7caca5d7952 406124d13b16]
	I1001 16:51:40.710014    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:40.720505    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:51:40.720588    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:40.731354    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:51:40.731436    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:40.742317    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:51:40.742399    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:40.752669    4927 logs.go:282] 0 containers: []
	W1001 16:51:40.752680    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:40.752745    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:40.763116    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:51:40.763133    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:51:40.763138    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:51:40.777046    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:51:40.777055    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:51:40.789536    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:51:40.789548    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:51:40.801188    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:51:40.801197    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:51:40.815796    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:40.815807    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:51:40.850805    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:40.850815    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:40.855272    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:40.855284    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:40.888548    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:51:40.888559    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:51:40.903632    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:51:40.903643    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:51:40.920593    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:40.920606    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:40.945459    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:51:40.945475    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:51:40.959979    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:51:40.959992    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:51:40.971951    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:51:40.971966    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:43.485596    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:44.528869    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:48.487796    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:48.488109    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:48.508333    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:51:48.508455    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:48.522959    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:51:48.523055    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:48.535060    4927 logs.go:282] 2 containers: [f7caca5d7952 406124d13b16]
	I1001 16:51:48.535150    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:48.545539    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:51:48.545616    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:48.556403    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:51:48.556488    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:48.566606    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:51:48.566694    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:48.577029    4927 logs.go:282] 0 containers: []
	W1001 16:51:48.577040    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:48.577113    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:48.587887    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:51:48.587903    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:51:48.587908    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:51:48.600250    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:51:48.600262    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:51:48.621393    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:48.621410    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:51:48.657522    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:51:48.657538    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:51:48.671347    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:51:48.671362    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:51:48.683348    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:51:48.683364    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:51:48.695062    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:51:48.695072    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:51:48.710579    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:51:48.710594    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:51:48.722184    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:48.722199    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:48.747482    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:51:48.747491    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:48.758576    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:48.758591    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:48.762959    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:48.762967    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:48.796875    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:51:48.796888    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:51:51.313732    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:49.531145    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:49.531348    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:49.548595    4804 logs.go:282] 1 containers: [30e3c75592d2]
	I1001 16:51:49.548707    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:49.561532    4804 logs.go:282] 1 containers: [4d4b8f092162]
	I1001 16:51:49.561615    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:49.577658    4804 logs.go:282] 4 containers: [2a9fdf492bbf 50b4f2e786a4 4e2b1026af64 52703530d033]
	I1001 16:51:49.577735    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:49.587822    4804 logs.go:282] 1 containers: [3110ccc6686b]
	I1001 16:51:49.587891    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:49.598563    4804 logs.go:282] 1 containers: [d915a2ca001d]
	I1001 16:51:49.598636    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:49.609757    4804 logs.go:282] 1 containers: [8c3d69cdc4f4]
	I1001 16:51:49.609840    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:49.625445    4804 logs.go:282] 0 containers: []
	W1001 16:51:49.625458    4804 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:49.625529    4804 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:49.636550    4804 logs.go:282] 1 containers: [e9d5eb4d5052]
	I1001 16:51:49.636567    4804 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:49.636574    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:49.641520    4804 logs.go:123] Gathering logs for coredns [2a9fdf492bbf] ...
	I1001 16:51:49.641530    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2a9fdf492bbf"
	I1001 16:51:49.653379    4804 logs.go:123] Gathering logs for coredns [4e2b1026af64] ...
	I1001 16:51:49.653390    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e2b1026af64"
	I1001 16:51:49.666391    4804 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:49.666401    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:49.725774    4804 logs.go:123] Gathering logs for kube-scheduler [3110ccc6686b] ...
	I1001 16:51:49.725784    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3110ccc6686b"
	I1001 16:51:49.740710    4804 logs.go:123] Gathering logs for container status ...
	I1001 16:51:49.740720    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:49.753032    4804 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:49.753042    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 16:51:49.769547    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:51:49.769638    4804 logs.go:138] Found kubelet problem: Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:51:49.785870    4804 logs.go:123] Gathering logs for coredns [50b4f2e786a4] ...
	I1001 16:51:49.785874    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50b4f2e786a4"
	I1001 16:51:49.797224    4804 logs.go:123] Gathering logs for coredns [52703530d033] ...
	I1001 16:51:49.797236    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52703530d033"
	I1001 16:51:49.809190    4804 logs.go:123] Gathering logs for kube-controller-manager [8c3d69cdc4f4] ...
	I1001 16:51:49.809201    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c3d69cdc4f4"
	I1001 16:51:49.826573    4804 logs.go:123] Gathering logs for kube-apiserver [30e3c75592d2] ...
	I1001 16:51:49.826585    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30e3c75592d2"
	I1001 16:51:49.840652    4804 logs.go:123] Gathering logs for etcd [4d4b8f092162] ...
	I1001 16:51:49.840664    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d4b8f092162"
	I1001 16:51:49.855311    4804 logs.go:123] Gathering logs for kube-proxy [d915a2ca001d] ...
	I1001 16:51:49.855320    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d915a2ca001d"
	I1001 16:51:49.867354    4804 logs.go:123] Gathering logs for storage-provisioner [e9d5eb4d5052] ...
	I1001 16:51:49.867364    4804 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9d5eb4d5052"
	I1001 16:51:49.878716    4804 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:49.878726    4804 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:49.902868    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:51:49.902878    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 16:51:49.902901    4804 out.go:270] X Problems detected in kubelet:
	W1001 16:51:49.902904    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: W1001 23:44:03.045433    4099 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	W1001 16:51:49.902908    4804 out.go:270]   Oct 01 23:44:03 running-upgrade-193000 kubelet[4099]: E1001 23:44:03.045448    4099 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-193000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-193000' and this object
	I1001 16:51:49.902912    4804 out.go:358] Setting ErrFile to fd 2...
	I1001 16:51:49.902915    4804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:51:56.315985    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:56.316173    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:56.329365    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:51:56.329444    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:56.345399    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:51:56.345473    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:56.355647    4927 logs.go:282] 2 containers: [f7caca5d7952 406124d13b16]
	I1001 16:51:56.355731    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:56.367454    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:51:56.367548    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:56.378959    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:51:56.379045    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:56.389565    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:51:56.389640    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:56.400049    4927 logs.go:282] 0 containers: []
	W1001 16:51:56.400061    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:56.400127    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:56.410860    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:51:56.410877    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:51:56.410884    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:56.423217    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:56.423232    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:51:56.458939    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:51:56.458948    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:51:56.472648    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:51:56.472661    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:51:56.484314    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:51:56.484329    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:51:56.495039    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:51:56.495055    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:51:56.506880    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:51:56.506896    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:51:56.524083    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:56.524103    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:56.548213    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:56.548223    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:56.552285    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:56.552294    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:56.585934    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:51:56.585949    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:51:56.599973    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:51:56.599984    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:51:56.614008    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:51:56.614024    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:51:59.128154    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:59.902804    4804 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:52:04.900110    4804 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:52:04.905665    4804 out.go:201] 
	W1001 16:52:04.908636    4804 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1001 16:52:04.908649    4804 out.go:270] * 
	W1001 16:52:04.909662    4804 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:52:04.918561    4804 out.go:201] 
	I1001 16:52:04.125015    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:52:04.125312    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:52:04.149756    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:52:04.149898    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:52:04.165545    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:52:04.165651    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:52:04.183423    4927 logs.go:282] 2 containers: [f7caca5d7952 406124d13b16]
	I1001 16:52:04.183506    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:52:04.193942    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:52:04.194027    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:52:04.204251    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:52:04.204336    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:52:04.215025    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:52:04.215094    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:52:04.225710    4927 logs.go:282] 0 containers: []
	W1001 16:52:04.225721    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:52:04.225802    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:52:04.239871    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:52:04.239887    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:52:04.239893    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:52:04.244397    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:52:04.244407    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:52:04.278778    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:52:04.278794    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:52:04.291115    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:52:04.291126    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:52:04.302880    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:52:04.302896    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:52:04.314652    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:52:04.314667    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:52:04.339990    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:52:04.340005    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:52:04.379553    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:52:04.379570    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:52:04.414572    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:52:04.414585    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:52:04.441098    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:52:04.441117    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:52:04.472459    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:52:04.472472    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:52:04.489463    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:52:04.489476    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:52:04.507198    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:52:04.507210    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:52:07.020274    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:52:12.019340    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:52:12.019727    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:52:12.054136    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:52:12.054306    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:52:12.073167    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:52:12.073289    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:52:12.089249    4927 logs.go:282] 4 containers: [4fb7dc6e2140 8ce201a253c1 f7caca5d7952 406124d13b16]
	I1001 16:52:12.089345    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:52:12.102809    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:52:12.102896    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:52:12.113052    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:52:12.113137    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:52:12.128603    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:52:12.128683    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:52:12.144004    4927 logs.go:282] 0 containers: []
	W1001 16:52:12.144023    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:52:12.144097    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:52:12.154865    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:52:12.154886    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:52:12.154892    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:52:12.169584    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:52:12.169594    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:52:12.188666    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:52:12.188678    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:52:12.224156    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:52:12.224168    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:52:12.240176    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:52:12.240187    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:52:12.252269    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:52:12.252279    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:52:12.264748    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:52:12.264759    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:52:12.269278    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:52:12.269290    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:52:12.284867    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:52:12.284879    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:52:12.301715    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:52:12.301727    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:52:12.313251    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:52:12.313266    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:52:12.338004    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:52:12.338013    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:52:12.373001    4927 logs.go:123] Gathering logs for coredns [4fb7dc6e2140] ...
	I1001 16:52:12.373011    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fb7dc6e2140"
	I1001 16:52:12.384604    4927 logs.go:123] Gathering logs for coredns [8ce201a253c1] ...
	I1001 16:52:12.384615    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ce201a253c1"
	I1001 16:52:12.396108    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:52:12.396121    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:52:14.908605    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-10-01 23:41:41 UTC, ends at Tue 2024-10-01 23:52:20 UTC. --
	Oct 01 23:52:01 running-upgrade-193000 dockerd[3574]: time="2024-10-01T23:52:01.462208574Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5776fcc7a757ec9f501faf351176570ec27c34bbdf524181d40b7bedc746e766 pid=16193 runtime=io.containerd.runc.v2
	Oct 01 23:52:01 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:01Z" level=error msg="ContainerStats resp: {0x40007f33c0 linux}"
	Oct 01 23:52:01 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:01Z" level=error msg="ContainerStats resp: {0x40009a8940 linux}"
	Oct 01 23:52:02 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:02Z" level=error msg="ContainerStats resp: {0x40008e8840 linux}"
	Oct 01 23:52:02 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:02Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 01 23:52:03 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:03Z" level=error msg="ContainerStats resp: {0x40008e98c0 linux}"
	Oct 01 23:52:03 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:03Z" level=error msg="ContainerStats resp: {0x400090b000 linux}"
	Oct 01 23:52:03 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:03Z" level=error msg="ContainerStats resp: {0x400090b380 linux}"
	Oct 01 23:52:03 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:03Z" level=error msg="ContainerStats resp: {0x4000419b80 linux}"
	Oct 01 23:52:03 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:03Z" level=error msg="ContainerStats resp: {0x400040e080 linux}"
	Oct 01 23:52:03 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:03Z" level=error msg="ContainerStats resp: {0x400090bf40 linux}"
	Oct 01 23:52:03 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:03Z" level=error msg="ContainerStats resp: {0x400040e980 linux}"
	Oct 01 23:52:07 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:07Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 01 23:52:12 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:12Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 01 23:52:13 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:13Z" level=error msg="ContainerStats resp: {0x40005aac40 linux}"
	Oct 01 23:52:13 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:13Z" level=error msg="ContainerStats resp: {0x40005ab340 linux}"
	Oct 01 23:52:14 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:14Z" level=error msg="ContainerStats resp: {0x400040e940 linux}"
	Oct 01 23:52:15 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:15Z" level=error msg="ContainerStats resp: {0x4000359e80 linux}"
	Oct 01 23:52:15 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:15Z" level=error msg="ContainerStats resp: {0x4000894100 linux}"
	Oct 01 23:52:15 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:15Z" level=error msg="ContainerStats resp: {0x40008948c0 linux}"
	Oct 01 23:52:15 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:15Z" level=error msg="ContainerStats resp: {0x4000856240 linux}"
	Oct 01 23:52:15 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:15Z" level=error msg="ContainerStats resp: {0x4000895180 linux}"
	Oct 01 23:52:15 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:15Z" level=error msg="ContainerStats resp: {0x4000856040 linux}"
	Oct 01 23:52:15 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:15Z" level=error msg="ContainerStats resp: {0x40008564c0 linux}"
	Oct 01 23:52:17 running-upgrade-193000 cri-dockerd[3416]: time="2024-10-01T23:52:17Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	5776fcc7a757e       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   86e7dacb24380
	32c87a7cafbcc       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   78e13f33fee15
	2a9fdf492bbf0       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   86e7dacb24380
	50b4f2e786a48       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   78e13f33fee15
	e9d5eb4d50520       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   2fcb27026f79e
	d915a2ca001d2       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   18ffd83236083
	3110ccc6686bb       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   6bb448051aca2
	4d4b8f092162b       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   9fce3ed227d3f
	30e3c75592d29       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   c83c9ead12bf0
	8c3d69cdc4f48       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   d275d0323525c
	
	
	==> coredns [2a9fdf492bbf] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4817669661083608450.4551122433327990277. HINFO: read udp 10.244.0.2:45673->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4817669661083608450.4551122433327990277. HINFO: read udp 10.244.0.2:35804->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4817669661083608450.4551122433327990277. HINFO: read udp 10.244.0.2:36537->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4817669661083608450.4551122433327990277. HINFO: read udp 10.244.0.2:43966->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4817669661083608450.4551122433327990277. HINFO: read udp 10.244.0.2:54504->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4817669661083608450.4551122433327990277. HINFO: read udp 10.244.0.2:40804->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4817669661083608450.4551122433327990277. HINFO: read udp 10.244.0.2:56414->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4817669661083608450.4551122433327990277. HINFO: read udp 10.244.0.2:48876->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4817669661083608450.4551122433327990277. HINFO: read udp 10.244.0.2:60021->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4817669661083608450.4551122433327990277. HINFO: read udp 10.244.0.2:53219->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [32c87a7cafbc] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3818829374210081747.5627678582127605777. HINFO: read udp 10.244.0.3:59705->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3818829374210081747.5627678582127605777. HINFO: read udp 10.244.0.3:34128->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3818829374210081747.5627678582127605777. HINFO: read udp 10.244.0.3:44838->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3818829374210081747.5627678582127605777. HINFO: read udp 10.244.0.3:55521->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3818829374210081747.5627678582127605777. HINFO: read udp 10.244.0.3:49711->10.0.2.3:53: i/o timeout
	
	
	==> coredns [50b4f2e786a4] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3119147969127592288.7511502162847916107. HINFO: read udp 10.244.0.3:40628->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3119147969127592288.7511502162847916107. HINFO: read udp 10.244.0.3:41781->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3119147969127592288.7511502162847916107. HINFO: read udp 10.244.0.3:48388->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3119147969127592288.7511502162847916107. HINFO: read udp 10.244.0.3:43997->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3119147969127592288.7511502162847916107. HINFO: read udp 10.244.0.3:53923->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3119147969127592288.7511502162847916107. HINFO: read udp 10.244.0.3:37170->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3119147969127592288.7511502162847916107. HINFO: read udp 10.244.0.3:41747->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3119147969127592288.7511502162847916107. HINFO: read udp 10.244.0.3:46327->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3119147969127592288.7511502162847916107. HINFO: read udp 10.244.0.3:59343->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5776fcc7a757] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 887070376912043280.1766382147745251073. HINFO: read udp 10.244.0.2:38454->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 887070376912043280.1766382147745251073. HINFO: read udp 10.244.0.2:42328->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 887070376912043280.1766382147745251073. HINFO: read udp 10.244.0.2:55615->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 887070376912043280.1766382147745251073. HINFO: read udp 10.244.0.2:48060->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 887070376912043280.1766382147745251073. HINFO: read udp 10.244.0.2:50796->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-193000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-193000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3
	                    minikube.k8s.io/name=running-upgrade-193000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T16_48_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 23:47:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-193000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 23:52:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 23:48:00 +0000   Tue, 01 Oct 2024 23:47:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 23:48:00 +0000   Tue, 01 Oct 2024 23:47:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 23:48:00 +0000   Tue, 01 Oct 2024 23:47:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 23:48:00 +0000   Tue, 01 Oct 2024 23:48:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-193000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc2c988391064eefa2e2f24b07244efc
	  System UUID:                fc2c988391064eefa2e2f24b07244efc
	  Boot ID:                    fd939b46-152d-4179-924f-6e27e0a7943d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-dnbtv                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 coredns-6d4b75cb6d-z5l29                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 etcd-running-upgrade-193000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m21s
	  kube-system                 kube-apiserver-running-upgrade-193000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-controller-manager-running-upgrade-193000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-qsbx7                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-running-upgrade-193000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m7s   kube-proxy       
	  Normal  NodeReady                4m21s  kubelet          Node running-upgrade-193000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s  kubelet          Node running-upgrade-193000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s  kubelet          Node running-upgrade-193000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s  kubelet          Node running-upgrade-193000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m21s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m9s   node-controller  Node running-upgrade-193000 event: Registered Node running-upgrade-193000 in Controller
	
	
	==> dmesg <==
	[  +0.078844] systemd-fstab-generator[1164]: Ignoring "noauto" for root device
	[  +0.080151] systemd-fstab-generator[1175]: Ignoring "noauto" for root device
	[Oct 1 23:43] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.085138] systemd-fstab-generator[1326]: Ignoring "noauto" for root device
	[  +0.080152] systemd-fstab-generator[1337]: Ignoring "noauto" for root device
	[ +15.017319] systemd-fstab-generator[1621]: Ignoring "noauto" for root device
	[  +0.343338] kauditd_printk_skb: 29 callbacks suppressed
	[ +14.289112] systemd-fstab-generator[2293]: Ignoring "noauto" for root device
	[  +2.618390] systemd-fstab-generator[2571]: Ignoring "noauto" for root device
	[  +0.146159] systemd-fstab-generator[2605]: Ignoring "noauto" for root device
	[  +0.084337] systemd-fstab-generator[2616]: Ignoring "noauto" for root device
	[  +0.101926] systemd-fstab-generator[2629]: Ignoring "noauto" for root device
	[  +2.576909] kauditd_printk_skb: 8 callbacks suppressed
	[  +0.201008] systemd-fstab-generator[3370]: Ignoring "noauto" for root device
	[  +0.082448] systemd-fstab-generator[3384]: Ignoring "noauto" for root device
	[  +0.075844] systemd-fstab-generator[3395]: Ignoring "noauto" for root device
	[  +0.090172] systemd-fstab-generator[3409]: Ignoring "noauto" for root device
	[  +2.405911] systemd-fstab-generator[3561]: Ignoring "noauto" for root device
	[  +4.429059] systemd-fstab-generator[3968]: Ignoring "noauto" for root device
	[  +1.060505] systemd-fstab-generator[4093]: Ignoring "noauto" for root device
	[Oct 1 23:44] kauditd_printk_skb: 68 callbacks suppressed
	[Oct 1 23:47] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.317859] systemd-fstab-generator[10703]: Ignoring "noauto" for root device
	[  +6.132775] systemd-fstab-generator[11320]: Ignoring "noauto" for root device
	[  +0.472527] systemd-fstab-generator[11452]: Ignoring "noauto" for root device
	
	
	==> etcd [4d4b8f092162] <==
	{"level":"info","ts":"2024-10-01T23:47:55.180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-10-01T23:47:55.180Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-10-01T23:47:55.183Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-01T23:47:55.183Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-01T23:47:55.183Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-01T23:47:55.183Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-01T23:47:55.183Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-01T23:47:56.179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-01T23:47:56.179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-01T23:47:56.179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-10-01T23:47:56.179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-10-01T23:47:56.179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-01T23:47:56.179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-10-01T23:47:56.179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-01T23:47:56.179Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-193000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-01T23:47:56.179Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T23:47:56.180Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T23:47:56.180Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T23:47:56.180Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-01T23:47:56.180Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T23:47:56.180Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T23:47:56.181Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T23:47:56.181Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T23:47:56.181Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-10-01T23:47:56.182Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 23:52:21 up 10 min,  0 users,  load average: 0.16, 0.23, 0.13
	Linux running-upgrade-193000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [30e3c75592d2] <==
	I1001 23:47:57.402922       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1001 23:47:57.419313       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1001 23:47:57.419364       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1001 23:47:57.420708       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1001 23:47:57.420761       1 cache.go:39] Caches are synced for autoregister controller
	I1001 23:47:57.424457       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1001 23:47:57.438757       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1001 23:47:58.165821       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1001 23:47:58.329566       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1001 23:47:58.334452       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1001 23:47:58.334796       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1001 23:47:58.472926       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1001 23:47:58.482739       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1001 23:47:58.590553       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1001 23:47:58.592647       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1001 23:47:58.593037       1 controller.go:611] quota admission added evaluator for: endpoints
	I1001 23:47:58.594314       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1001 23:47:59.453354       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1001 23:48:00.163861       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1001 23:48:00.167031       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1001 23:48:00.180199       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1001 23:48:00.214098       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1001 23:48:12.805662       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1001 23:48:12.955609       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1001 23:48:13.373900       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [8c3d69cdc4f4] <==
	I1001 23:48:12.300427       1 shared_informer.go:262] Caches are synced for service account
	I1001 23:48:12.304385       1 shared_informer.go:262] Caches are synced for PVC protection
	I1001 23:48:12.304399       1 shared_informer.go:262] Caches are synced for cronjob
	I1001 23:48:12.304414       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1001 23:48:12.304450       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1001 23:48:12.306138       1 shared_informer.go:262] Caches are synced for TTL after finished
	I1001 23:48:12.308767       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1001 23:48:12.316147       1 shared_informer.go:262] Caches are synced for persistent volume
	I1001 23:48:12.353564       1 shared_informer.go:262] Caches are synced for PV protection
	I1001 23:48:12.373832       1 shared_informer.go:262] Caches are synced for expand
	I1001 23:48:12.404378       1 shared_informer.go:262] Caches are synced for attach detach
	I1001 23:48:12.458207       1 shared_informer.go:262] Caches are synced for resource quota
	I1001 23:48:12.500523       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1001 23:48:12.500551       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I1001 23:48:12.500569       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1001 23:48:12.500576       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1001 23:48:12.505943       1 shared_informer.go:262] Caches are synced for resource quota
	I1001 23:48:12.554070       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1001 23:48:12.809306       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qsbx7"
	I1001 23:48:12.927347       1 shared_informer.go:262] Caches are synced for garbage collector
	I1001 23:48:12.960140       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1001 23:48:13.004029       1 shared_informer.go:262] Caches are synced for garbage collector
	I1001 23:48:13.004060       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1001 23:48:13.308086       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-dnbtv"
	I1001 23:48:13.313468       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-z5l29"
	
	
	==> kube-proxy [d915a2ca001d] <==
	I1001 23:48:13.340555       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1001 23:48:13.340971       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1001 23:48:13.340990       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1001 23:48:13.372139       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1001 23:48:13.372152       1 server_others.go:206] "Using iptables Proxier"
	I1001 23:48:13.372168       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1001 23:48:13.372270       1 server.go:661] "Version info" version="v1.24.1"
	I1001 23:48:13.372278       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 23:48:13.373122       1 config.go:444] "Starting node config controller"
	I1001 23:48:13.373128       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1001 23:48:13.374001       1 config.go:317] "Starting service config controller"
	I1001 23:48:13.374005       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1001 23:48:13.374013       1 config.go:226] "Starting endpoint slice config controller"
	I1001 23:48:13.374015       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1001 23:48:13.473928       1 shared_informer.go:262] Caches are synced for node config
	I1001 23:48:13.474978       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1001 23:48:13.475024       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [3110ccc6686b] <==
	W1001 23:47:57.385020       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 23:47:57.385954       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1001 23:47:57.385030       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1001 23:47:57.386041       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1001 23:47:57.385041       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1001 23:47:57.385082       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1001 23:47:57.385093       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1001 23:47:57.385103       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1001 23:47:57.385112       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1001 23:47:57.385122       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1001 23:47:57.386388       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1001 23:47:57.386392       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1001 23:47:57.386496       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1001 23:47:57.386500       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1001 23:47:57.386503       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1001 23:47:57.386505       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1001 23:47:58.228142       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 23:47:58.228206       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1001 23:47:58.353525       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 23:47:58.353555       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1001 23:47:58.376467       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1001 23:47:58.376556       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1001 23:47:58.432383       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1001 23:47:58.432403       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1001 23:47:58.982743       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-10-01 23:41:41 UTC, ends at Tue 2024-10-01 23:52:21 UTC. --
	Oct 01 23:48:02 running-upgrade-193000 kubelet[11326]: I1001 23:48:02.252137   11326 kubelet_getters.go:300] "Path does not exist" path="/var/lib/kubelet/pods/09c7e0ac-2ab1-494e-a51d-a0a7a405af09/volumes"
	Oct 01 23:48:02 running-upgrade-193000 kubelet[11326]: I1001 23:48:02.393298   11326 request.go:601] Waited for 1.124447756s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Oct 01 23:48:02 running-upgrade-193000 kubelet[11326]: E1001 23:48:02.397284   11326 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-193000\" already exists" pod="kube-system/etcd-running-upgrade-193000"
	Oct 01 23:48:12 running-upgrade-193000 kubelet[11326]: I1001 23:48:12.271096   11326 topology_manager.go:200] "Topology Admit Handler"
	Oct 01 23:48:12 running-upgrade-193000 kubelet[11326]: I1001 23:48:12.350890   11326 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 01 23:48:12 running-upgrade-193000 kubelet[11326]: I1001 23:48:12.350910   11326 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtc2j\" (UniqueName: \"kubernetes.io/projected/c8586060-315e-4520-816f-c4f4ad2bf68b-kube-api-access-xtc2j\") pod \"storage-provisioner\" (UID: \"c8586060-315e-4520-816f-c4f4ad2bf68b\") " pod="kube-system/storage-provisioner"
	Oct 01 23:48:12 running-upgrade-193000 kubelet[11326]: I1001 23:48:12.350926   11326 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c8586060-315e-4520-816f-c4f4ad2bf68b-tmp\") pod \"storage-provisioner\" (UID: \"c8586060-315e-4520-816f-c4f4ad2bf68b\") " pod="kube-system/storage-provisioner"
	Oct 01 23:48:12 running-upgrade-193000 kubelet[11326]: I1001 23:48:12.351202   11326 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 01 23:48:12 running-upgrade-193000 kubelet[11326]: E1001 23:48:12.455370   11326 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 01 23:48:12 running-upgrade-193000 kubelet[11326]: E1001 23:48:12.455390   11326 projected.go:192] Error preparing data for projected volume kube-api-access-xtc2j for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Oct 01 23:48:12 running-upgrade-193000 kubelet[11326]: E1001 23:48:12.455426   11326 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/c8586060-315e-4520-816f-c4f4ad2bf68b-kube-api-access-xtc2j podName:c8586060-315e-4520-816f-c4f4ad2bf68b nodeName:}" failed. No retries permitted until 2024-10-01 23:48:12.955413389 +0000 UTC m=+12.803713817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xtc2j" (UniqueName: "kubernetes.io/projected/c8586060-315e-4520-816f-c4f4ad2bf68b-kube-api-access-xtc2j") pod "storage-provisioner" (UID: "c8586060-315e-4520-816f-c4f4ad2bf68b") : configmap "kube-root-ca.crt" not found
	Oct 01 23:48:12 running-upgrade-193000 kubelet[11326]: I1001 23:48:12.812255   11326 topology_manager.go:200] "Topology Admit Handler"
	Oct 01 23:48:12 running-upgrade-193000 kubelet[11326]: I1001 23:48:12.854669   11326 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pzxm\" (UniqueName: \"kubernetes.io/projected/15c21e6e-3bf8-4f7b-a9ec-38a3af917eea-kube-api-access-8pzxm\") pod \"kube-proxy-qsbx7\" (UID: \"15c21e6e-3bf8-4f7b-a9ec-38a3af917eea\") " pod="kube-system/kube-proxy-qsbx7"
	Oct 01 23:48:12 running-upgrade-193000 kubelet[11326]: I1001 23:48:12.854691   11326 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15c21e6e-3bf8-4f7b-a9ec-38a3af917eea-xtables-lock\") pod \"kube-proxy-qsbx7\" (UID: \"15c21e6e-3bf8-4f7b-a9ec-38a3af917eea\") " pod="kube-system/kube-proxy-qsbx7"
	Oct 01 23:48:12 running-upgrade-193000 kubelet[11326]: I1001 23:48:12.854701   11326 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15c21e6e-3bf8-4f7b-a9ec-38a3af917eea-lib-modules\") pod \"kube-proxy-qsbx7\" (UID: \"15c21e6e-3bf8-4f7b-a9ec-38a3af917eea\") " pod="kube-system/kube-proxy-qsbx7"
	Oct 01 23:48:12 running-upgrade-193000 kubelet[11326]: I1001 23:48:12.854718   11326 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/15c21e6e-3bf8-4f7b-a9ec-38a3af917eea-kube-proxy\") pod \"kube-proxy-qsbx7\" (UID: \"15c21e6e-3bf8-4f7b-a9ec-38a3af917eea\") " pod="kube-system/kube-proxy-qsbx7"
	Oct 01 23:48:13 running-upgrade-193000 kubelet[11326]: I1001 23:48:13.315778   11326 topology_manager.go:200] "Topology Admit Handler"
	Oct 01 23:48:13 running-upgrade-193000 kubelet[11326]: I1001 23:48:13.318372   11326 topology_manager.go:200] "Topology Admit Handler"
	Oct 01 23:48:13 running-upgrade-193000 kubelet[11326]: I1001 23:48:13.350779   11326 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="2fcb27026f79ee845c344ebb7c5e245656607b088f029995a1cab325fc1970ba"
	Oct 01 23:48:13 running-upgrade-193000 kubelet[11326]: I1001 23:48:13.358268   11326 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z9fh\" (UniqueName: \"kubernetes.io/projected/54016b40-4c5c-4f7a-9897-cea7725ffefc-kube-api-access-7z9fh\") pod \"coredns-6d4b75cb6d-z5l29\" (UID: \"54016b40-4c5c-4f7a-9897-cea7725ffefc\") " pod="kube-system/coredns-6d4b75cb6d-z5l29"
	Oct 01 23:48:13 running-upgrade-193000 kubelet[11326]: I1001 23:48:13.358318   11326 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2k69\" (UniqueName: \"kubernetes.io/projected/70909c2d-8641-4d3a-8338-ba6d8b2ce7d0-kube-api-access-c2k69\") pod \"coredns-6d4b75cb6d-dnbtv\" (UID: \"70909c2d-8641-4d3a-8338-ba6d8b2ce7d0\") " pod="kube-system/coredns-6d4b75cb6d-dnbtv"
	Oct 01 23:48:13 running-upgrade-193000 kubelet[11326]: I1001 23:48:13.358345   11326 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70909c2d-8641-4d3a-8338-ba6d8b2ce7d0-config-volume\") pod \"coredns-6d4b75cb6d-dnbtv\" (UID: \"70909c2d-8641-4d3a-8338-ba6d8b2ce7d0\") " pod="kube-system/coredns-6d4b75cb6d-dnbtv"
	Oct 01 23:48:13 running-upgrade-193000 kubelet[11326]: I1001 23:48:13.358357   11326 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54016b40-4c5c-4f7a-9897-cea7725ffefc-config-volume\") pod \"coredns-6d4b75cb6d-z5l29\" (UID: \"54016b40-4c5c-4f7a-9897-cea7725ffefc\") " pod="kube-system/coredns-6d4b75cb6d-z5l29"
	Oct 01 23:52:01 running-upgrade-193000 kubelet[11326]: I1001 23:52:01.723186   11326 scope.go:110] "RemoveContainer" containerID="4e2b1026af649dd9c9e5210bca6024ab592ebe7bef4f3e7a803537f9a9c32d8e"
	Oct 01 23:52:01 running-upgrade-193000 kubelet[11326]: I1001 23:52:01.737828   11326 scope.go:110] "RemoveContainer" containerID="52703530d03366647e7b59bc883112affbebc99680961d9c4b890e208e037d69"
	
	
	==> storage-provisioner [e9d5eb4d5052] <==
	I1001 23:48:13.411080       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 23:48:13.414759       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 23:48:13.414777       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 23:48:13.418222       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 23:48:13.418305       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-193000_07dfe8a4-ee31-4868-bce5-20c7481b81e6!
	I1001 23:48:13.419873       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4c3fc21b-aeca-4c4e-a39e-d8f7777f9a44", APIVersion:"v1", ResourceVersion:"360", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-193000_07dfe8a4-ee31-4868-bce5-20c7481b81e6 became leader
	I1001 23:48:13.518561       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-193000_07dfe8a4-ee31-4868-bce5-20c7481b81e6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-193000 -n running-upgrade-193000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-193000 -n running-upgrade-193000: exit status 2 (15.710167792s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-193000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-193000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-193000
--- FAIL: TestRunningBinaryUpgrade (708.87s)

                                                
                                    
TestKubernetesUpgrade (18.29s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-407000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-407000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.897883583s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-407000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-407000" primary control-plane node in "kubernetes-upgrade-407000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-407000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:43:51.133363    4855 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:43:51.133483    4855 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:43:51.133486    4855 out.go:358] Setting ErrFile to fd 2...
	I1001 16:43:51.133497    4855 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:43:51.133620    4855 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:43:51.134651    4855 out.go:352] Setting JSON to false
	I1001 16:43:51.151157    4855 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4399,"bootTime":1727821832,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:43:51.151221    4855 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:43:51.157520    4855 out.go:177] * [kubernetes-upgrade-407000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:43:51.164442    4855 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:43:51.164517    4855 notify.go:220] Checking for updates...
	I1001 16:43:51.171379    4855 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:43:51.174406    4855 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:43:51.177463    4855 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:43:51.180404    4855 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:43:51.183475    4855 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:43:51.186784    4855 config.go:182] Loaded profile config "multinode-603000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:43:51.186848    4855 config.go:182] Loaded profile config "running-upgrade-193000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 16:43:51.186902    4855 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:43:51.191421    4855 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:43:51.198481    4855 start.go:297] selected driver: qemu2
	I1001 16:43:51.198488    4855 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:43:51.198495    4855 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:43:51.200604    4855 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 16:43:51.203382    4855 out.go:177] * Automatically selected the socket_vmnet network
	I1001 16:43:51.206484    4855 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 16:43:51.206504    4855 cni.go:84] Creating CNI manager for ""
	I1001 16:43:51.206534    4855 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1001 16:43:51.206558    4855 start.go:340] cluster config:
	{Name:kubernetes-upgrade-407000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-407000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:43:51.210049    4855 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:43:51.217520    4855 out.go:177] * Starting "kubernetes-upgrade-407000" primary control-plane node in "kubernetes-upgrade-407000" cluster
	I1001 16:43:51.221496    4855 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1001 16:43:51.221513    4855 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1001 16:43:51.221521    4855 cache.go:56] Caching tarball of preloaded images
	I1001 16:43:51.221604    4855 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:43:51.221610    4855 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1001 16:43:51.221672    4855 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/kubernetes-upgrade-407000/config.json ...
	I1001 16:43:51.221683    4855 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/kubernetes-upgrade-407000/config.json: {Name:mk9ee0a433387d9fb311d7e579bb43f843b7e4cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:43:51.221999    4855 start.go:360] acquireMachinesLock for kubernetes-upgrade-407000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:43:51.222034    4855 start.go:364] duration metric: took 25.875µs to acquireMachinesLock for "kubernetes-upgrade-407000"
	I1001 16:43:51.222045    4855 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-407000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-407000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:43:51.222070    4855 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:43:51.226398    4855 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 16:43:51.243373    4855 start.go:159] libmachine.API.Create for "kubernetes-upgrade-407000" (driver="qemu2")
	I1001 16:43:51.243409    4855 client.go:168] LocalClient.Create starting
	I1001 16:43:51.243466    4855 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:43:51.243497    4855 main.go:141] libmachine: Decoding PEM data...
	I1001 16:43:51.243507    4855 main.go:141] libmachine: Parsing certificate...
	I1001 16:43:51.243549    4855 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:43:51.243572    4855 main.go:141] libmachine: Decoding PEM data...
	I1001 16:43:51.243580    4855 main.go:141] libmachine: Parsing certificate...
	I1001 16:43:51.243903    4855 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:43:51.404919    4855 main.go:141] libmachine: Creating SSH key...
	I1001 16:43:51.547896    4855 main.go:141] libmachine: Creating Disk image...
	I1001 16:43:51.547906    4855 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:43:51.548161    4855 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/disk.qcow2
	I1001 16:43:51.557182    4855 main.go:141] libmachine: STDOUT: 
	I1001 16:43:51.557200    4855 main.go:141] libmachine: STDERR: 
	I1001 16:43:51.557254    4855 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/disk.qcow2 +20000M
	I1001 16:43:51.565151    4855 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:43:51.565166    4855 main.go:141] libmachine: STDERR: 
	I1001 16:43:51.565183    4855 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/disk.qcow2
	I1001 16:43:51.565190    4855 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:43:51.565204    4855 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:43:51.565233    4855 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:c5:dc:e1:d9:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/disk.qcow2
	I1001 16:43:51.566826    4855 main.go:141] libmachine: STDOUT: 
	I1001 16:43:51.566842    4855 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:43:51.566862    4855 client.go:171] duration metric: took 323.449333ms to LocalClient.Create
	I1001 16:43:53.569029    4855 start.go:128] duration metric: took 2.3469695s to createHost
	I1001 16:43:53.569079    4855 start.go:83] releasing machines lock for "kubernetes-upgrade-407000", held for 2.347062584s
	W1001 16:43:53.569129    4855 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:43:53.574850    4855 out.go:177] * Deleting "kubernetes-upgrade-407000" in qemu2 ...
	W1001 16:43:53.597927    4855 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:43:53.597947    4855 start.go:729] Will try again in 5 seconds ...
	I1001 16:43:58.600023    4855 start.go:360] acquireMachinesLock for kubernetes-upgrade-407000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:43:58.600279    4855 start.go:364] duration metric: took 215.875µs to acquireMachinesLock for "kubernetes-upgrade-407000"
	I1001 16:43:58.600341    4855 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-407000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-407000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:43:58.600443    4855 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:43:58.611786    4855 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 16:43:58.648439    4855 start.go:159] libmachine.API.Create for "kubernetes-upgrade-407000" (driver="qemu2")
	I1001 16:43:58.648486    4855 client.go:168] LocalClient.Create starting
	I1001 16:43:58.648594    4855 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:43:58.648657    4855 main.go:141] libmachine: Decoding PEM data...
	I1001 16:43:58.648682    4855 main.go:141] libmachine: Parsing certificate...
	I1001 16:43:58.648742    4855 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:43:58.648782    4855 main.go:141] libmachine: Decoding PEM data...
	I1001 16:43:58.648793    4855 main.go:141] libmachine: Parsing certificate...
	I1001 16:43:58.649406    4855 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:43:58.812575    4855 main.go:141] libmachine: Creating SSH key...
	I1001 16:43:58.941328    4855 main.go:141] libmachine: Creating Disk image...
	I1001 16:43:58.941338    4855 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:43:58.941569    4855 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/disk.qcow2
	I1001 16:43:58.950595    4855 main.go:141] libmachine: STDOUT: 
	I1001 16:43:58.950614    4855 main.go:141] libmachine: STDERR: 
	I1001 16:43:58.950688    4855 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/disk.qcow2 +20000M
	I1001 16:43:58.958657    4855 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:43:58.958675    4855 main.go:141] libmachine: STDERR: 
	I1001 16:43:58.958686    4855 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/disk.qcow2
	I1001 16:43:58.958692    4855 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:43:58.958705    4855 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:43:58.958736    4855 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:4b:49:b1:90:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/disk.qcow2
	I1001 16:43:58.960334    4855 main.go:141] libmachine: STDOUT: 
	I1001 16:43:58.960349    4855 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:43:58.960362    4855 client.go:171] duration metric: took 311.872ms to LocalClient.Create
	I1001 16:44:00.962556    4855 start.go:128] duration metric: took 2.362101625s to createHost
	I1001 16:44:00.962630    4855 start.go:83] releasing machines lock for "kubernetes-upgrade-407000", held for 2.362357333s
	W1001 16:44:00.963132    4855 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-407000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-407000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:44:00.979084    4855 out.go:201] 
	W1001 16:44:00.982578    4855 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:44:00.982775    4855 out.go:270] * 
	* 
	W1001 16:44:00.984118    4855 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:44:00.990934    4855 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-407000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-407000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-407000: (2.993758041s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-407000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-407000 status --format={{.Host}}: exit status 7 (38.318583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-407000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-407000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.17626475s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-407000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-407000" primary control-plane node in "kubernetes-upgrade-407000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-407000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-407000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:44:04.064871    4892 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:44:04.065022    4892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:44:04.065026    4892 out.go:358] Setting ErrFile to fd 2...
	I1001 16:44:04.065028    4892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:44:04.065165    4892 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:44:04.066224    4892 out.go:352] Setting JSON to false
	I1001 16:44:04.082514    4892 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4412,"bootTime":1727821832,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:44:04.082590    4892 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:44:04.086509    4892 out.go:177] * [kubernetes-upgrade-407000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:44:04.093475    4892 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:44:04.093510    4892 notify.go:220] Checking for updates...
	I1001 16:44:04.099482    4892 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:44:04.102425    4892 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:44:04.105510    4892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:44:04.108396    4892 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:44:04.111453    4892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:44:04.114799    4892 config.go:182] Loaded profile config "kubernetes-upgrade-407000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1001 16:44:04.115026    4892 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:44:04.118443    4892 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 16:44:04.125464    4892 start.go:297] selected driver: qemu2
	I1001 16:44:04.125471    4892 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-407000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-407000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:44:04.125529    4892 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:44:04.127559    4892 cni.go:84] Creating CNI manager for ""
	I1001 16:44:04.127596    4892 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:44:04.127612    4892 start.go:340] cluster config:
	{Name:kubernetes-upgrade-407000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-407000 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMne
tClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:44:04.130826    4892 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:44:04.137450    4892 out.go:177] * Starting "kubernetes-upgrade-407000" primary control-plane node in "kubernetes-upgrade-407000" cluster
	I1001 16:44:04.141511    4892 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:44:04.141526    4892 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:44:04.141542    4892 cache.go:56] Caching tarball of preloaded images
	I1001 16:44:04.141618    4892 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:44:04.141623    4892 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:44:04.141685    4892 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/kubernetes-upgrade-407000/config.json ...
	I1001 16:44:04.142034    4892 start.go:360] acquireMachinesLock for kubernetes-upgrade-407000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:44:04.142060    4892 start.go:364] duration metric: took 20.834µs to acquireMachinesLock for "kubernetes-upgrade-407000"
	I1001 16:44:04.142068    4892 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:44:04.142073    4892 fix.go:54] fixHost starting: 
	I1001 16:44:04.142205    4892 fix.go:112] recreateIfNeeded on kubernetes-upgrade-407000: state=Stopped err=<nil>
	W1001 16:44:04.142213    4892 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:44:04.150481    4892 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-407000" ...
	I1001 16:44:04.154423    4892 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:44:04.154461    4892 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:4b:49:b1:90:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/disk.qcow2
	I1001 16:44:04.156323    4892 main.go:141] libmachine: STDOUT: 
	I1001 16:44:04.156336    4892 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:44:04.156367    4892 fix.go:56] duration metric: took 14.293958ms for fixHost
	I1001 16:44:04.156371    4892 start.go:83] releasing machines lock for "kubernetes-upgrade-407000", held for 14.307542ms
	W1001 16:44:04.156377    4892 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:44:04.156405    4892 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:44:04.156410    4892 start.go:729] Will try again in 5 seconds ...
	I1001 16:44:09.158563    4892 start.go:360] acquireMachinesLock for kubernetes-upgrade-407000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:44:09.159081    4892 start.go:364] duration metric: took 413.166µs to acquireMachinesLock for "kubernetes-upgrade-407000"
	I1001 16:44:09.159231    4892 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:44:09.159292    4892 fix.go:54] fixHost starting: 
	I1001 16:44:09.159998    4892 fix.go:112] recreateIfNeeded on kubernetes-upgrade-407000: state=Stopped err=<nil>
	W1001 16:44:09.160023    4892 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:44:09.167360    4892 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-407000" ...
	I1001 16:44:09.171360    4892 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:44:09.171512    4892 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:4b:49:b1:90:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubernetes-upgrade-407000/disk.qcow2
	I1001 16:44:09.179751    4892 main.go:141] libmachine: STDOUT: 
	I1001 16:44:09.179795    4892 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:44:09.179863    4892 fix.go:56] duration metric: took 20.615125ms for fixHost
	I1001 16:44:09.179876    4892 start.go:83] releasing machines lock for "kubernetes-upgrade-407000", held for 20.773875ms
	W1001 16:44:09.180040    4892 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-407000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-407000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:44:09.185639    4892 out.go:201] 
	W1001 16:44:09.189373    4892 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:44:09.189398    4892 out.go:270] * 
	* 
	W1001 16:44:09.191369    4892 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:44:09.200359    4892 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-407000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-407000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-407000 version --output=json: exit status 1 (62.810167ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-407000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-10-01 16:44:09.278157 -0700 PDT m=+3442.170138459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-407000 -n kubernetes-upgrade-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-407000 -n kubernetes-upgrade-407000: exit status 7 (32.870583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-407000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-407000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-407000
--- FAIL: TestKubernetesUpgrade (18.29s)
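Note: both start attempts in TestKubernetesUpgrade above (the v1.20.0 start and the v1.31.1 restart) die with the same GUEST_PROVISION error, Failed to connect to "/var/run/socket_vmnet": Connection refused, before the VM ever boots. The sketch below is a minimal, illustrative probe of that precondition and is not part of the test suite; the socket path is taken from the socket_vmnet_client invocations in the logs above.

	// check_socket_vmnet.go: hypothetical stand-alone probe for the unix socket
	// that minikube's qemu2 driver connects to on this agent.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path seen in the qemu command lines above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here reproduces the GUEST_PROVISION failure in the report
			fmt.Printf("cannot connect to %s: %v\n", sock, err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails on the agent, restarting the socket_vmnet daemon (however it is managed on this host) would be the first thing to try before re-running the qemu2-based tests.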

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.05s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
E1001 16:40:05.677335    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19740
- KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2563695065/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.05s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.72s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19740
- KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3785163987/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.72s)
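Both TestHyperkitDriverSkipUpgrade subtests above exit with DRV_UNSUPPORTED_OS because this agent is darwin/arm64 and the hyperkit driver only exists for x86_64 macOS. The snippet below is a minimal sketch of that platform condition, included only to make the failure mode explicit; it is not the test's actual skip logic.

	// hypothetical platform guard mirroring the DRV_UNSUPPORTED_OS condition above
	package main

	import (
		"fmt"
		"runtime"
	)

	func main() {
		if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
			fmt.Println("hyperkit driver is unavailable on darwin/arm64; these subtests cannot pass here")
			return
		}
		fmt.Println("hyperkit driver may be usable on this platform")
	}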

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (580.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2154272767 start -p stopped-upgrade-342000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2154272767 start -p stopped-upgrade-342000 --memory=2200 --vm-driver=qemu2 : (45.645384833s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2154272767 -p stopped-upgrade-342000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2154272767 -p stopped-upgrade-342000 stop: (12.115483291s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-342000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E1001 16:45:17.692308    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:47:02.576767    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:47:14.599275    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:52:02.566604    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-342000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.727645709s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-342000" primary control-plane node in "stopped-upgrade-342000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-342000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1001 16:45:11.870838    4927 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:45:11.871035    4927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:45:11.871039    4927 out.go:358] Setting ErrFile to fd 2...
	I1001 16:45:11.871046    4927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:45:11.871197    4927 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:45:11.872422    4927 out.go:352] Setting JSON to false
	I1001 16:45:11.892221    4927 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4479,"bootTime":1727821832,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:45:11.892290    4927 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:45:11.897388    4927 out.go:177] * [stopped-upgrade-342000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:45:11.904309    4927 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:45:11.904358    4927 notify.go:220] Checking for updates...
	I1001 16:45:11.912362    4927 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:45:11.916354    4927 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:45:11.919380    4927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:45:11.922348    4927 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:45:11.925331    4927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:45:11.928584    4927 config.go:182] Loaded profile config "stopped-upgrade-342000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 16:45:11.931382    4927 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1001 16:45:11.934338    4927 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:45:11.938323    4927 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 16:45:11.945251    4927 start.go:297] selected driver: qemu2
	I1001 16:45:11.945256    4927 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50522 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1001 16:45:11.945307    4927 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:45:11.947654    4927 cni.go:84] Creating CNI manager for ""
	I1001 16:45:11.947692    4927 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:45:11.947724    4927 start.go:340] cluster config:
	{Name:stopped-upgrade-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50522 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1001 16:45:11.947785    4927 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:45:11.956186    4927 out.go:177] * Starting "stopped-upgrade-342000" primary control-plane node in "stopped-upgrade-342000" cluster
	I1001 16:45:11.960296    4927 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1001 16:45:11.960309    4927 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1001 16:45:11.960314    4927 cache.go:56] Caching tarball of preloaded images
	I1001 16:45:11.960360    4927 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:45:11.960365    4927 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1001 16:45:11.960408    4927 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/config.json ...
	I1001 16:45:11.960813    4927 start.go:360] acquireMachinesLock for stopped-upgrade-342000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:45:11.960844    4927 start.go:364] duration metric: took 25.334µs to acquireMachinesLock for "stopped-upgrade-342000"
	I1001 16:45:11.960851    4927 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:45:11.960856    4927 fix.go:54] fixHost starting: 
	I1001 16:45:11.960976    4927 fix.go:112] recreateIfNeeded on stopped-upgrade-342000: state=Stopped err=<nil>
	W1001 16:45:11.960984    4927 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:45:11.968328    4927 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-342000" ...
	I1001 16:45:11.972340    4927 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:45:11.972400    4927 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50486-:22,hostfwd=tcp::50487-:2376,hostname=stopped-upgrade-342000 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/disk.qcow2
	I1001 16:45:12.017358    4927 main.go:141] libmachine: STDOUT: 
	I1001 16:45:12.017392    4927 main.go:141] libmachine: STDERR: 
	I1001 16:45:12.017399    4927 main.go:141] libmachine: Waiting for VM to start (ssh -p 50486 docker@127.0.0.1)...
	I1001 16:45:31.678531    4927 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/config.json ...
	I1001 16:45:31.679231    4927 machine.go:93] provisionDockerMachine start ...
	I1001 16:45:31.679433    4927 main.go:141] libmachine: Using SSH client type: native
	I1001 16:45:31.679890    4927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f65c00] 0x102f68440 <nil>  [] 0s} localhost 50486 <nil> <nil>}
	I1001 16:45:31.679904    4927 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 16:45:31.757789    4927 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1001 16:45:31.757829    4927 buildroot.go:166] provisioning hostname "stopped-upgrade-342000"
	I1001 16:45:31.757997    4927 main.go:141] libmachine: Using SSH client type: native
	I1001 16:45:31.758265    4927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f65c00] 0x102f68440 <nil>  [] 0s} localhost 50486 <nil> <nil>}
	I1001 16:45:31.758281    4927 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-342000 && echo "stopped-upgrade-342000" | sudo tee /etc/hostname
	I1001 16:45:31.826489    4927 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-342000
	
	I1001 16:45:31.826576    4927 main.go:141] libmachine: Using SSH client type: native
	I1001 16:45:31.826770    4927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f65c00] 0x102f68440 <nil>  [] 0s} localhost 50486 <nil> <nil>}
	I1001 16:45:31.826783    4927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-342000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-342000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-342000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 16:45:31.886892    4927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 16:45:31.886904    4927 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19740-1141/.minikube CaCertPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19740-1141/.minikube}
	I1001 16:45:31.886920    4927 buildroot.go:174] setting up certificates
	I1001 16:45:31.886925    4927 provision.go:84] configureAuth start
	I1001 16:45:31.886932    4927 provision.go:143] copyHostCerts
	I1001 16:45:31.887023    4927 exec_runner.go:144] found /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.pem, removing ...
	I1001 16:45:31.887034    4927 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.pem
	I1001 16:45:31.887296    4927 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.pem (1078 bytes)
	I1001 16:45:31.887478    4927 exec_runner.go:144] found /Users/jenkins/minikube-integration/19740-1141/.minikube/cert.pem, removing ...
	I1001 16:45:31.887482    4927 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19740-1141/.minikube/cert.pem
	I1001 16:45:31.887539    4927 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19740-1141/.minikube/cert.pem (1123 bytes)
	I1001 16:45:31.887657    4927 exec_runner.go:144] found /Users/jenkins/minikube-integration/19740-1141/.minikube/key.pem, removing ...
	I1001 16:45:31.887660    4927 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19740-1141/.minikube/key.pem
	I1001 16:45:31.887717    4927 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19740-1141/.minikube/key.pem (1679 bytes)
	I1001 16:45:31.887810    4927 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-342000 san=[127.0.0.1 localhost minikube stopped-upgrade-342000]
	I1001 16:45:31.969219    4927 provision.go:177] copyRemoteCerts
	I1001 16:45:31.969254    4927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 16:45:31.969262    4927 sshutil.go:53] new ssh client: &{IP:localhost Port:50486 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/id_rsa Username:docker}
	I1001 16:45:31.995080    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1001 16:45:32.002195    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 16:45:32.008860    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 16:45:32.015536    4927 provision.go:87] duration metric: took 128.605541ms to configureAuth
	I1001 16:45:32.015546    4927 buildroot.go:189] setting minikube options for container-runtime
	I1001 16:45:32.015651    4927 config.go:182] Loaded profile config "stopped-upgrade-342000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 16:45:32.015693    4927 main.go:141] libmachine: Using SSH client type: native
	I1001 16:45:32.015775    4927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f65c00] 0x102f68440 <nil>  [] 0s} localhost 50486 <nil> <nil>}
	I1001 16:45:32.015780    4927 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1001 16:45:32.065819    4927 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1001 16:45:32.065829    4927 buildroot.go:70] root file system type: tmpfs
	I1001 16:45:32.065882    4927 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1001 16:45:32.065935    4927 main.go:141] libmachine: Using SSH client type: native
	I1001 16:45:32.066047    4927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f65c00] 0x102f68440 <nil>  [] 0s} localhost 50486 <nil> <nil>}
	I1001 16:45:32.066080    4927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1001 16:45:32.120146    4927 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1001 16:45:32.120205    4927 main.go:141] libmachine: Using SSH client type: native
	I1001 16:45:32.120312    4927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f65c00] 0x102f68440 <nil>  [] 0s} localhost 50486 <nil> <nil>}
	I1001 16:45:32.120324    4927 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1001 16:45:32.467727    4927 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1001 16:45:32.467744    4927 machine.go:96] duration metric: took 788.510375ms to provisionDockerMachine
	I1001 16:45:32.467752    4927 start.go:293] postStartSetup for "stopped-upgrade-342000" (driver="qemu2")
	I1001 16:45:32.467758    4927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 16:45:32.467850    4927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 16:45:32.467864    4927 sshutil.go:53] new ssh client: &{IP:localhost Port:50486 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/id_rsa Username:docker}
	I1001 16:45:32.495725    4927 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 16:45:32.497046    4927 info.go:137] Remote host: Buildroot 2021.02.12
	I1001 16:45:32.497055    4927 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19740-1141/.minikube/addons for local assets ...
	I1001 16:45:32.497144    4927 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19740-1141/.minikube/files for local assets ...
	I1001 16:45:32.497273    4927 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19740-1141/.minikube/files/etc/ssl/certs/16592.pem -> 16592.pem in /etc/ssl/certs
	I1001 16:45:32.497407    4927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 16:45:32.499984    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/files/etc/ssl/certs/16592.pem --> /etc/ssl/certs/16592.pem (1708 bytes)
	I1001 16:45:32.507103    4927 start.go:296] duration metric: took 39.345709ms for postStartSetup
	I1001 16:45:32.507117    4927 fix.go:56] duration metric: took 20.546473334s for fixHost
	I1001 16:45:32.507153    4927 main.go:141] libmachine: Using SSH client type: native
	I1001 16:45:32.507260    4927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f65c00] 0x102f68440 <nil>  [] 0s} localhost 50486 <nil> <nil>}
	I1001 16:45:32.507266    4927 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1001 16:45:32.556401    4927 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727826332.732110171
	
	I1001 16:45:32.556410    4927 fix.go:216] guest clock: 1727826332.732110171
	I1001 16:45:32.556414    4927 fix.go:229] Guest: 2024-10-01 16:45:32.732110171 -0700 PDT Remote: 2024-10-01 16:45:32.507119 -0700 PDT m=+20.667052084 (delta=224.991171ms)
	I1001 16:45:32.556434    4927 fix.go:200] guest clock delta is within tolerance: 224.991171ms
	I1001 16:45:32.556437    4927 start.go:83] releasing machines lock for "stopped-upgrade-342000", held for 20.595801042s
	I1001 16:45:32.556500    4927 ssh_runner.go:195] Run: cat /version.json
	I1001 16:45:32.556509    4927 sshutil.go:53] new ssh client: &{IP:localhost Port:50486 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/id_rsa Username:docker}
	I1001 16:45:32.556500    4927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 16:45:32.556540    4927 sshutil.go:53] new ssh client: &{IP:localhost Port:50486 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/id_rsa Username:docker}
	W1001 16:45:32.557028    4927 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50486: connect: connection refused
	I1001 16:45:32.557047    4927 retry.go:31] will retry after 265.718747ms: dial tcp [::1]:50486: connect: connection refused
	W1001 16:45:32.584925    4927 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1001 16:45:32.584966    4927 ssh_runner.go:195] Run: systemctl --version
	I1001 16:45:32.586741    4927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1001 16:45:32.588499    4927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1001 16:45:32.588529    4927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1001 16:45:32.591695    4927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1001 16:45:32.596437    4927 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1001 16:45:32.596444    4927 start.go:495] detecting cgroup driver to use...
	I1001 16:45:32.596530    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 16:45:32.602874    4927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1001 16:45:32.605925    4927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1001 16:45:32.608693    4927 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1001 16:45:32.608730    4927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1001 16:45:32.611990    4927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 16:45:32.615419    4927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1001 16:45:32.618543    4927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 16:45:32.621490    4927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 16:45:32.624226    4927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1001 16:45:32.627433    4927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1001 16:45:32.630670    4927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1001 16:45:32.633428    4927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 16:45:32.636157    4927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 16:45:32.639391    4927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:45:32.718158    4927 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1001 16:45:32.728559    4927 start.go:495] detecting cgroup driver to use...
	I1001 16:45:32.728658    4927 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1001 16:45:32.738429    4927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 16:45:32.742796    4927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1001 16:45:32.754939    4927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1001 16:45:32.760893    4927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1001 16:45:32.766798    4927 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1001 16:45:32.827092    4927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1001 16:45:32.835261    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 16:45:32.841370    4927 ssh_runner.go:195] Run: which cri-dockerd
	I1001 16:45:32.842637    4927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1001 16:45:32.845966    4927 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1001 16:45:32.851632    4927 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1001 16:45:32.933185    4927 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1001 16:45:33.012405    4927 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1001 16:45:33.012463    4927 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1001 16:45:33.017686    4927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:45:33.094016    4927 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1001 16:45:34.217088    4927 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.123065458s)
	I1001 16:45:34.217177    4927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1001 16:45:34.222196    4927 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1001 16:45:34.229466    4927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1001 16:45:34.234463    4927 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1001 16:45:34.314966    4927 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1001 16:45:34.393593    4927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:45:34.469713    4927 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1001 16:45:34.476423    4927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1001 16:45:34.481527    4927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:45:34.557501    4927 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1001 16:45:34.595987    4927 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1001 16:45:34.596087    4927 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1001 16:45:34.598602    4927 start.go:563] Will wait 60s for crictl version
	I1001 16:45:34.598663    4927 ssh_runner.go:195] Run: which crictl
	I1001 16:45:34.599965    4927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 16:45:34.615245    4927 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1001 16:45:34.615327    4927 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1001 16:45:34.630839    4927 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1001 16:45:34.651259    4927 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1001 16:45:34.651342    4927 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1001 16:45:34.652764    4927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 16:45:34.656421    4927 kubeadm.go:883] updating cluster {Name:stopped-upgrade-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50522 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1001 16:45:34.656482    4927 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1001 16:45:34.656535    4927 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1001 16:45:34.666725    4927 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1001 16:45:34.666734    4927 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1001 16:45:34.666790    4927 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1001 16:45:34.670139    4927 ssh_runner.go:195] Run: which lz4
	I1001 16:45:34.671431    4927 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1001 16:45:34.672621    4927 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1001 16:45:34.672630    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1001 16:45:35.548178    4927 docker.go:649] duration metric: took 876.794ms to copy over tarball
	I1001 16:45:35.548250    4927 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1001 16:45:36.686843    4927 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.138590792s)
	I1001 16:45:36.686857    4927 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1001 16:45:36.702706    4927 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1001 16:45:36.705715    4927 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1001 16:45:36.711057    4927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:45:36.776526    4927 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1001 16:45:38.273381    4927 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.496854583s)
	I1001 16:45:38.273486    4927 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1001 16:45:38.284683    4927 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1001 16:45:38.284695    4927 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1001 16:45:38.284700    4927 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1001 16:45:38.288760    4927 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:45:38.290493    4927 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1001 16:45:38.292572    4927 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1001 16:45:38.292738    4927 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:45:38.294477    4927 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1001 16:45:38.294612    4927 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 16:45:38.295634    4927 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1001 16:45:38.295695    4927 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1001 16:45:38.296968    4927 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1001 16:45:38.296986    4927 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 16:45:38.298085    4927 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1001 16:45:38.298175    4927 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1001 16:45:38.299464    4927 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 16:45:38.299610    4927 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1001 16:45:38.300571    4927 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1001 16:45:38.301997    4927 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	W1001 16:45:40.229921    4927 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1001 16:45:40.230697    4927 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1001 16:45:40.270976    4927 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1001 16:45:40.271032    4927 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1001 16:45:40.271167    4927 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1001 16:45:40.291262    4927 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1001 16:45:40.291417    4927 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1001 16:45:40.294128    4927 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1001 16:45:40.294155    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1001 16:45:40.318929    4927 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 16:45:40.339840    4927 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1001 16:45:40.339855    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1001 16:45:40.345749    4927 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1001 16:45:40.345783    4927 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 16:45:40.345852    4927 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1001 16:45:40.362482    4927 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1001 16:45:40.363795    4927 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1001 16:45:40.397833    4927 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1001 16:45:40.397866    4927 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1001 16:45:40.397869    4927 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1001 16:45:40.397887    4927 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1001 16:45:40.397930    4927 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1001 16:45:40.397941    4927 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1001 16:45:40.397947    4927 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1001 16:45:40.397977    4927 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1001 16:45:40.411967    4927 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1001 16:45:40.411977    4927 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1001 16:45:40.412105    4927 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1001 16:45:40.413537    4927 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1001 16:45:40.413550    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1001 16:45:40.420297    4927 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1001 16:45:40.420309    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1001 16:45:40.448700    4927 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W1001 16:45:40.660521    4927 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1001 16:45:40.660725    4927 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:45:40.677739    4927 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1001 16:45:40.677770    4927 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:45:40.677851    4927 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:45:40.695116    4927 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1001 16:45:40.695259    4927 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1001 16:45:40.696942    4927 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1001 16:45:40.696954    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1001 16:45:40.724587    4927 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1001 16:45:40.724600    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1001 16:45:40.909103    4927 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1001 16:45:40.912793    4927 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1001 16:45:40.914576    4927 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1001 16:45:40.975740    4927 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1001 16:45:40.975782    4927 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1001 16:45:40.975800    4927 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1001 16:45:40.975805    4927 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1001 16:45:40.975814    4927 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1001 16:45:40.975875    4927 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1001 16:45:40.975875    4927 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1001 16:45:40.975935    4927 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1001 16:45:40.975951    4927 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1001 16:45:40.975985    4927 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1001 16:45:40.994739    4927 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1001 16:45:40.995101    4927 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1001 16:45:40.995112    4927 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1001 16:45:40.995134    4927 cache_images.go:92] duration metric: took 2.710455292s to LoadCachedImages
	W1001 16:45:40.995173    4927 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I1001 16:45:40.995179    4927 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1001 16:45:40.995235    4927 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-342000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 16:45:40.995304    4927 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1001 16:45:41.008342    4927 cni.go:84] Creating CNI manager for ""
	I1001 16:45:41.008354    4927 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:45:41.008362    4927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 16:45:41.008371    4927 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-342000 NodeName:stopped-upgrade-342000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 16:45:41.008438    4927 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-342000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 16:45:41.008508    4927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1001 16:45:41.011619    4927 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 16:45:41.011648    4927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 16:45:41.014770    4927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1001 16:45:41.019996    4927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 16:45:41.025060    4927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1001 16:45:41.030387    4927 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1001 16:45:41.031535    4927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 16:45:41.035548    4927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:45:41.117115    4927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 16:45:41.122676    4927 certs.go:68] Setting up /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000 for IP: 10.0.2.15
	I1001 16:45:41.122693    4927 certs.go:194] generating shared ca certs ...
	I1001 16:45:41.122702    4927 certs.go:226] acquiring lock for ca certs: {Name:mk74f46ad151665c6dd5cd39311b967c23e44dd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:45:41.122874    4927 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.key
	I1001 16:45:41.122924    4927 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/proxy-client-ca.key
	I1001 16:45:41.122931    4927 certs.go:256] generating profile certs ...
	I1001 16:45:41.123004    4927 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/client.key
	I1001 16:45:41.123021    4927 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.key.1a19673b
	I1001 16:45:41.123038    4927 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.crt.1a19673b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1001 16:45:41.197715    4927 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.crt.1a19673b ...
	I1001 16:45:41.197726    4927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.crt.1a19673b: {Name:mkf7b2bb4b2a9fc3a2ac37e52595639f961ffa70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:45:41.198038    4927 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.key.1a19673b ...
	I1001 16:45:41.198043    4927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.key.1a19673b: {Name:mkd560aa46ee4338eb0dc86c953bbc4e16a7d889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:45:41.198170    4927 certs.go:381] copying /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.crt.1a19673b -> /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.crt
	I1001 16:45:41.198370    4927 certs.go:385] copying /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.key.1a19673b -> /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.key
	I1001 16:45:41.198543    4927 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/proxy-client.key
	I1001 16:45:41.198672    4927 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/1659.pem (1338 bytes)
	W1001 16:45:41.198706    4927 certs.go:480] ignoring /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/1659_empty.pem, impossibly tiny 0 bytes
	I1001 16:45:41.198712    4927 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca-key.pem (1679 bytes)
	I1001 16:45:41.198739    4927 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem (1078 bytes)
	I1001 16:45:41.198764    4927 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem (1123 bytes)
	I1001 16:45:41.198788    4927 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/key.pem (1679 bytes)
	I1001 16:45:41.198839    4927 certs.go:484] found cert: /Users/jenkins/minikube-integration/19740-1141/.minikube/files/etc/ssl/certs/16592.pem (1708 bytes)
	I1001 16:45:41.199210    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 16:45:41.206211    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1001 16:45:41.212546    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 16:45:41.219654    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1001 16:45:41.227050    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1001 16:45:41.234249    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 16:45:41.240839    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 16:45:41.247626    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1001 16:45:41.255103    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/1659.pem --> /usr/share/ca-certificates/1659.pem (1338 bytes)
	I1001 16:45:41.261920    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/files/etc/ssl/certs/16592.pem --> /usr/share/ca-certificates/16592.pem (1708 bytes)
	I1001 16:45:41.268268    4927 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19740-1141/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 16:45:41.275343    4927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 16:45:41.280264    4927 ssh_runner.go:195] Run: openssl version
	I1001 16:45:41.282259    4927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 16:45:41.285150    4927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 16:45:41.286545    4927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I1001 16:45:41.286578    4927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 16:45:41.288409    4927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 16:45:41.291690    4927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1659.pem && ln -fs /usr/share/ca-certificates/1659.pem /etc/ssl/certs/1659.pem"
	I1001 16:45:41.294835    4927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1659.pem
	I1001 16:45:41.296132    4927 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 23:04 /usr/share/ca-certificates/1659.pem
	I1001 16:45:41.296160    4927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1659.pem
	I1001 16:45:41.297872    4927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1659.pem /etc/ssl/certs/51391683.0"
	I1001 16:45:41.300581    4927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16592.pem && ln -fs /usr/share/ca-certificates/16592.pem /etc/ssl/certs/16592.pem"
	I1001 16:45:41.303869    4927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16592.pem
	I1001 16:45:41.305348    4927 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 23:04 /usr/share/ca-certificates/16592.pem
	I1001 16:45:41.305374    4927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16592.pem
	I1001 16:45:41.307078    4927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16592.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 16:45:41.309933    4927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 16:45:41.311207    4927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 16:45:41.313199    4927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 16:45:41.314989    4927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 16:45:41.317042    4927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 16:45:41.318866    4927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 16:45:41.320610    4927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1001 16:45:41.322452    4927 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-342000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50522 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-342000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1001 16:45:41.322531    4927 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1001 16:45:41.333560    4927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 16:45:41.336765    4927 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1001 16:45:41.336776    4927 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1001 16:45:41.336806    4927 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1001 16:45:41.340708    4927 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1001 16:45:41.341018    4927 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-342000" does not appear in /Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:45:41.341113    4927 kubeconfig.go:62] /Users/jenkins/minikube-integration/19740-1141/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-342000" cluster setting kubeconfig missing "stopped-upgrade-342000" context setting]
	I1001 16:45:41.341284    4927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/kubeconfig: {Name:mk6821adb20f42e2e1842a7c6bcaf1ce77531dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:45:41.341722    4927 kapi.go:59] client config for stopped-upgrade-342000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/client.key", CAFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10453e5d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 16:45:41.342079    4927 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1001 16:45:41.344787    4927 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-342000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I1001 16:45:41.344793    4927 kubeadm.go:1160] stopping kube-system containers ...
	I1001 16:45:41.344846    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1001 16:45:41.356416    4927 docker.go:483] Stopping containers: [4d26939b1517 b15f8da6832d b67cc8c69187 9f884cce7c0d da81e837a710 0e7521e8098a ef3ee586a96a b7ab46fee4d3]
	I1001 16:45:41.356490    4927 ssh_runner.go:195] Run: docker stop 4d26939b1517 b15f8da6832d b67cc8c69187 9f884cce7c0d da81e837a710 0e7521e8098a ef3ee586a96a b7ab46fee4d3
	I1001 16:45:41.368010    4927 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1001 16:45:41.374245    4927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 16:45:41.377103    4927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 16:45:41.377109    4927 kubeadm.go:157] found existing configuration files:
	
	I1001 16:45:41.377137    4927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/admin.conf
	I1001 16:45:41.379821    4927 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 16:45:41.379850    4927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 16:45:41.383044    4927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/kubelet.conf
	I1001 16:45:41.385917    4927 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 16:45:41.385944    4927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 16:45:41.388679    4927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/controller-manager.conf
	I1001 16:45:41.391514    4927 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 16:45:41.391540    4927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 16:45:41.394466    4927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/scheduler.conf
	I1001 16:45:41.396795    4927 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 16:45:41.396822    4927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 16:45:41.399687    4927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 16:45:41.402740    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:45:41.426556    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:45:42.238289    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:45:42.365162    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:45:42.389427    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1001 16:45:42.414528    4927 api_server.go:52] waiting for apiserver process to appear ...
	I1001 16:45:42.414620    4927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 16:45:42.916758    4927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 16:45:43.416668    4927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 16:45:43.421412    4927 api_server.go:72] duration metric: took 1.006895958s to wait for apiserver process to appear ...
	I1001 16:45:43.421423    4927 api_server.go:88] waiting for apiserver healthz status ...
	I1001 16:45:43.421443    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:45:48.423210    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:45:48.423308    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:45:53.424034    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:45:53.424068    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:45:58.424457    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:45:58.424525    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:03.425429    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:03.425472    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:08.426432    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:08.426482    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:13.427643    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:13.427701    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:18.429344    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:18.429425    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:23.430333    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:23.430429    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:28.433082    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:28.433170    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:33.435745    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:33.435792    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:38.438179    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:38.438245    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:43.440584    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:43.440762    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:46:43.452099    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:46:43.452194    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:46:43.462646    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:46:43.462733    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:46:43.472873    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:46:43.472965    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:46:43.488288    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:46:43.488374    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:46:43.498574    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:46:43.498666    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:46:43.509432    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:46:43.509516    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:46:43.519492    4927 logs.go:282] 0 containers: []
	W1001 16:46:43.519521    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:46:43.519591    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:46:43.530097    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:46:43.530116    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:46:43.530121    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:46:43.555391    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:46:43.555398    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:46:43.559339    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:46:43.559353    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:46:43.602513    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:46:43.602526    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:46:43.614382    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:46:43.614391    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:46:43.631130    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:46:43.631141    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:46:43.644092    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:46:43.644106    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:46:43.655145    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:46:43.655160    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:46:43.667053    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:46:43.667064    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:46:43.705550    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:46:43.705558    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:46:43.719082    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:46:43.719092    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:46:43.732871    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:46:43.732882    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:46:43.755300    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:46:43.755315    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:46:43.766788    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:46:43.766799    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:46:43.784984    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:46:43.784997    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:46:43.863703    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:46:43.863715    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:46:43.878651    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:46:43.878664    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:46:46.396409    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:51.399031    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:51.399188    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:46:51.414611    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:46:51.414712    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:46:51.427289    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:46:51.427383    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:46:51.439204    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:46:51.439289    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:46:51.450116    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:46:51.450203    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:46:51.460495    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:46:51.460576    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:46:51.470821    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:46:51.470918    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:46:51.481519    4927 logs.go:282] 0 containers: []
	W1001 16:46:51.481530    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:46:51.481601    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:46:51.493829    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:46:51.493847    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:46:51.493853    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:46:51.510507    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:46:51.510518    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:46:51.536646    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:46:51.536654    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:46:51.548020    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:46:51.548030    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:46:51.565791    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:46:51.565801    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:46:51.579412    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:46:51.579423    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:46:51.593792    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:46:51.593806    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:46:51.605952    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:46:51.605964    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:46:51.643338    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:46:51.643348    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:46:51.654806    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:46:51.654820    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:46:51.667132    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:46:51.667143    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:46:51.671341    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:46:51.671348    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:46:51.706432    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:46:51.706446    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:46:51.720421    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:46:51.720435    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:46:51.731465    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:46:51.731476    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:46:51.768892    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:46:51.768902    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:46:51.781597    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:46:51.781607    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:46:54.295133    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:46:59.297530    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:46:59.298051    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:46:59.331892    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:46:59.332055    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:46:59.350034    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:46:59.350152    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:46:59.364404    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:46:59.364483    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:46:59.375768    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:46:59.375859    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:46:59.386471    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:46:59.386551    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:46:59.398474    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:46:59.398556    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:46:59.408416    4927 logs.go:282] 0 containers: []
	W1001 16:46:59.408429    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:46:59.408501    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:46:59.424869    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:46:59.424888    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:46:59.424895    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:46:59.461982    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:46:59.461993    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:46:59.497715    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:46:59.497726    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:46:59.535428    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:46:59.535439    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:46:59.547411    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:46:59.547426    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:46:59.561927    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:46:59.561939    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:46:59.579558    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:46:59.579570    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:46:59.591024    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:46:59.591037    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:46:59.602474    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:46:59.602486    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:46:59.628323    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:46:59.628331    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:46:59.642071    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:46:59.642081    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:46:59.656504    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:46:59.656515    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:46:59.667751    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:46:59.667764    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:46:59.694465    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:46:59.694476    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:46:59.699018    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:46:59.699025    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:46:59.711070    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:46:59.711085    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:46:59.726386    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:46:59.726395    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:02.240530    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:07.242838    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:07.243010    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:47:07.257796    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:47:07.257893    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:47:07.270489    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:47:07.270570    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:47:07.283053    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:47:07.283136    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:47:07.294148    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:47:07.294239    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:47:07.305018    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:47:07.305099    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:47:07.316144    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:47:07.316225    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:47:07.326727    4927 logs.go:282] 0 containers: []
	W1001 16:47:07.326739    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:47:07.326812    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:47:07.337483    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:47:07.337504    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:47:07.337509    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:47:07.342308    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:47:07.342316    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:47:07.378258    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:47:07.378270    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:47:07.397395    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:47:07.397411    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:47:07.408614    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:47:07.408626    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:47:07.447658    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:47:07.447668    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:47:07.461548    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:47:07.461563    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:47:07.503955    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:47:07.503966    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:47:07.520458    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:47:07.520470    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:47:07.533095    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:47:07.533105    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:07.545283    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:47:07.545299    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:47:07.559499    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:47:07.559510    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:47:07.574570    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:47:07.574582    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:47:07.593645    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:47:07.593656    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:47:07.610868    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:47:07.610880    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:47:07.624666    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:47:07.624677    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:47:07.637004    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:47:07.637019    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:47:10.162793    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:15.165172    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:15.165662    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:47:15.196958    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:47:15.197115    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:47:15.221955    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:47:15.222067    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:47:15.235976    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:47:15.236067    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:47:15.247116    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:47:15.247199    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:47:15.257790    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:47:15.257864    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:47:15.268980    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:47:15.269066    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:47:15.279214    4927 logs.go:282] 0 containers: []
	W1001 16:47:15.279224    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:47:15.279289    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:47:15.290691    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:47:15.290708    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:47:15.290713    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:47:15.308306    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:47:15.308317    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:47:15.319940    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:47:15.319953    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:47:15.345025    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:47:15.345034    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:47:15.359556    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:47:15.359567    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:47:15.363775    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:47:15.363783    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:47:15.398138    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:47:15.398148    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:47:15.409429    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:47:15.409441    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:47:15.448416    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:47:15.448425    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:47:15.488011    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:47:15.488023    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:47:15.502573    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:47:15.502582    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:47:15.514645    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:47:15.514659    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:47:15.532696    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:47:15.532707    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:47:15.544713    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:47:15.544723    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:47:15.562076    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:47:15.562088    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:47:15.574015    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:47:15.574027    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:47:15.588322    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:47:15.588339    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:18.100774    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:23.103038    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:23.103245    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:47:23.132039    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:47:23.132153    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:47:23.147041    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:47:23.147147    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:47:23.161736    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:47:23.161818    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:47:23.172451    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:47:23.172542    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:47:23.182790    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:47:23.182871    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:47:23.193549    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:47:23.193622    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:47:23.203242    4927 logs.go:282] 0 containers: []
	W1001 16:47:23.203256    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:47:23.203328    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:47:23.218584    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:47:23.218604    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:47:23.218610    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:47:23.232744    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:47:23.232758    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:47:23.243423    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:47:23.243435    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:47:23.259631    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:47:23.259648    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:47:23.272004    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:47:23.272014    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:47:23.276088    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:47:23.276094    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:47:23.290472    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:47:23.290486    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:47:23.304524    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:47:23.304537    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:47:23.316451    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:47:23.316468    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:47:23.328787    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:47:23.328798    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:47:23.354615    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:47:23.354623    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:23.366578    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:47:23.366589    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:47:23.386518    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:47:23.386534    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:47:23.398708    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:47:23.398719    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:47:23.437944    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:47:23.437955    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:47:23.473467    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:47:23.473481    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:47:23.511152    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:47:23.511170    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:47:26.023753    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:31.024599    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:31.025108    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:47:31.058150    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:47:31.058309    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:47:31.078026    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:47:31.078148    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:47:31.092295    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:47:31.092388    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:47:31.104076    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:47:31.104163    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:47:31.114997    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:47:31.115079    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:47:31.133466    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:47:31.133554    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:47:31.143519    4927 logs.go:282] 0 containers: []
	W1001 16:47:31.143531    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:47:31.143601    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:47:31.155427    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:47:31.155445    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:47:31.155450    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:31.168221    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:47:31.168234    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:47:31.183267    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:47:31.183280    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:47:31.196257    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:47:31.196269    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:47:31.207597    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:47:31.207608    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:47:31.220486    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:47:31.220499    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:47:31.234312    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:47:31.234322    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:47:31.246456    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:47:31.246467    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:47:31.271193    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:47:31.271202    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:47:31.275332    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:47:31.275338    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:47:31.310620    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:47:31.310631    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:47:31.322559    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:47:31.322571    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:47:31.340181    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:47:31.340197    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:47:31.362740    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:47:31.362751    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:47:31.399637    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:47:31.399649    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:47:31.437696    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:47:31.437709    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:47:31.452820    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:47:31.452830    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:47:33.966601    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:38.969000    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:38.969401    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:47:39.006841    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:47:39.007010    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:47:39.026022    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:47:39.026140    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:47:39.039936    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:47:39.040032    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:47:39.051948    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:47:39.052023    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:47:39.062597    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:47:39.062680    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:47:39.073659    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:47:39.073742    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:47:39.085434    4927 logs.go:282] 0 containers: []
	W1001 16:47:39.085446    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:47:39.085518    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:47:39.095909    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:47:39.095933    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:47:39.095938    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:47:39.100080    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:47:39.100088    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:47:39.113688    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:47:39.113700    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:47:39.151881    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:47:39.151892    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:47:39.166123    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:47:39.166134    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:47:39.177567    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:47:39.177579    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:47:39.194199    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:47:39.194211    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:47:39.211732    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:47:39.211748    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:47:39.224357    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:47:39.224368    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:47:39.248979    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:47:39.249009    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:47:39.286155    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:47:39.286168    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:47:39.299896    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:47:39.299908    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:47:39.311154    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:47:39.311167    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:47:39.327502    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:47:39.327518    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:47:39.362170    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:47:39.362186    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:47:39.374172    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:47:39.374183    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:47:39.385182    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:47:39.385193    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:41.899260    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:46.901607    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:46.901895    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:47:46.927883    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:47:46.928011    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:47:46.944787    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:47:46.944897    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:47:46.957866    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:47:46.957958    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:47:46.970095    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:47:46.970182    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:47:46.980558    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:47:46.980632    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:47:46.990877    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:47:46.990948    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:47:47.000893    4927 logs.go:282] 0 containers: []
	W1001 16:47:47.000904    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:47:47.000977    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:47:47.012298    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:47:47.012317    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:47:47.012323    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:47:47.023909    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:47:47.023922    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:47:47.048120    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:47:47.048127    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:47:47.086237    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:47:47.086249    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:47:47.097455    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:47:47.097470    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:47:47.109227    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:47:47.109243    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:47:47.124045    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:47:47.124056    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:47:47.142081    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:47:47.142092    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:47:47.154901    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:47:47.154912    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:47:47.159255    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:47:47.159263    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:47:47.195827    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:47:47.195838    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:47:47.212848    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:47:47.212865    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:47:47.226522    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:47:47.226533    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:47.238645    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:47:47.238654    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:47:47.276330    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:47:47.276347    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:47:47.288040    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:47:47.288058    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:47:47.304822    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:47:47.304833    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:47:49.825019    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:47:54.827289    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:47:54.827402    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:47:54.838923    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:47:54.839012    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:47:54.850947    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:47:54.851033    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:47:54.862431    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:47:54.862519    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:47:54.875922    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:47:54.876015    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:47:54.887569    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:47:54.887655    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:47:54.900882    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:47:54.900981    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:47:54.913533    4927 logs.go:282] 0 containers: []
	W1001 16:47:54.913546    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:47:54.913674    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:47:54.925643    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:47:54.925664    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:47:54.925669    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:47:54.941335    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:47:54.941347    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:47:54.961339    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:47:54.961353    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:47:54.973192    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:47:54.973206    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:47:54.986499    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:47:54.986514    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:47:55.003113    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:47:55.003135    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:47:55.016808    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:47:55.016820    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:47:55.042877    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:47:55.042902    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:47:55.080442    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:47:55.080454    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:47:55.119640    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:47:55.119652    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:47:55.133470    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:47:55.133486    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:47:55.147239    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:47:55.147254    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:47:55.163710    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:47:55.163723    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:47:55.175742    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:47:55.175754    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:47:55.214468    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:47:55.214484    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:47:55.219944    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:47:55.219953    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:47:55.236930    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:47:55.236946    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:47:57.756639    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:02.758826    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:02.759036    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:48:02.775953    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:48:02.776056    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:48:02.789032    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:48:02.789124    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:48:02.802925    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:48:02.803011    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:48:02.813266    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:48:02.813357    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:48:02.828211    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:48:02.828302    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:48:02.838396    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:48:02.838471    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:48:02.853635    4927 logs.go:282] 0 containers: []
	W1001 16:48:02.853648    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:48:02.853724    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:48:02.863949    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:48:02.863966    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:48:02.863971    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:48:02.868362    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:48:02.868369    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:48:02.883143    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:48:02.883153    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:48:02.894795    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:48:02.894806    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:48:02.929686    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:48:02.929696    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:48:02.943906    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:48:02.943917    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:48:02.960436    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:48:02.960445    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:48:02.973747    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:48:02.973761    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:48:02.986095    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:48:02.986108    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:48:03.024342    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:48:03.024352    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:48:03.043423    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:48:03.043433    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:48:03.054983    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:48:03.054995    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:48:03.091658    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:48:03.091668    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:48:03.105257    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:48:03.105268    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:48:03.117747    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:48:03.117759    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:48:03.129489    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:48:03.129500    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:48:03.140722    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:48:03.140733    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:48:05.667531    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:10.669771    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:10.669905    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:48:10.681615    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:48:10.681703    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:48:10.692189    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:48:10.692274    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:48:10.702934    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:48:10.703013    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:48:10.715499    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:48:10.715580    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:48:10.729393    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:48:10.729475    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:48:10.740008    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:48:10.740086    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:48:10.750422    4927 logs.go:282] 0 containers: []
	W1001 16:48:10.750436    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:48:10.750515    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:48:10.761381    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:48:10.761401    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:48:10.761407    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:48:10.777049    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:48:10.777060    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:48:10.789405    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:48:10.789417    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:48:10.827169    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:48:10.827181    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:48:10.865088    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:48:10.865105    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:48:10.877514    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:48:10.877524    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:48:10.894288    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:48:10.894303    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:48:10.912220    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:48:10.912231    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:48:10.923431    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:48:10.923442    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:48:10.946913    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:48:10.946927    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:48:10.984136    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:48:10.984148    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:48:10.998842    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:48:10.998852    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:48:11.016375    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:48:11.016390    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:48:11.029953    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:48:11.029964    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:48:11.041126    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:48:11.041137    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:48:11.045841    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:48:11.045848    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:48:11.057788    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:48:11.057804    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:48:13.572788    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:18.574951    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:18.575182    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:48:18.597322    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:48:18.597447    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:48:18.613364    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:48:18.613468    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:48:18.627060    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:48:18.627143    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:48:18.638969    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:48:18.639058    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:48:18.649381    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:48:18.649461    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:48:18.661214    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:48:18.661294    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:48:18.674061    4927 logs.go:282] 0 containers: []
	W1001 16:48:18.674073    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:48:18.674147    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:48:18.685860    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:48:18.685877    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:48:18.685882    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:48:18.725004    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:48:18.725012    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:48:18.739025    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:48:18.739035    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:48:18.756310    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:48:18.756323    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:48:18.769123    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:48:18.769136    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:48:18.779892    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:48:18.779905    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:48:18.791077    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:48:18.791088    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:48:18.808173    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:48:18.808187    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:48:18.832498    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:48:18.832506    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:48:18.837148    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:48:18.837156    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:48:18.876085    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:48:18.876100    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:48:18.893138    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:48:18.893155    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:48:18.904505    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:48:18.904518    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:48:18.916658    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:48:18.916669    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:48:18.958406    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:48:18.958422    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:48:18.973353    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:48:18.973364    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:48:18.989885    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:48:18.989894    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:48:21.503654    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:26.505933    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:26.506042    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:48:26.516919    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:48:26.517008    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:48:26.527639    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:48:26.527717    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:48:26.544066    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:48:26.544151    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:48:26.554974    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:48:26.555066    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:48:26.565765    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:48:26.565848    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:48:26.576030    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:48:26.576115    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:48:26.587442    4927 logs.go:282] 0 containers: []
	W1001 16:48:26.587458    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:48:26.587532    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:48:26.598536    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:48:26.598558    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:48:26.598563    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:48:26.612482    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:48:26.612496    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:48:26.650389    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:48:26.650400    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:48:26.671512    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:48:26.671521    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:48:26.685583    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:48:26.685593    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:48:26.708140    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:48:26.708148    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:48:26.719533    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:48:26.719546    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:48:26.732820    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:48:26.732834    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:48:26.743619    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:48:26.743632    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:48:26.780567    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:48:26.780577    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:48:26.798429    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:48:26.798439    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:48:26.812842    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:48:26.812854    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:48:26.825040    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:48:26.825051    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:48:26.841646    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:48:26.841659    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:48:26.853937    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:48:26.853948    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:48:26.865278    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:48:26.865290    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:48:26.901241    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:48:26.901249    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:48:29.406105    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:34.408721    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:34.408997    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:48:34.429020    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:48:34.429132    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:48:34.443127    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:48:34.443214    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:48:34.455486    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:48:34.455574    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:48:34.466391    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:48:34.466473    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:48:34.476776    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:48:34.476863    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:48:34.487091    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:48:34.487177    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:48:34.497338    4927 logs.go:282] 0 containers: []
	W1001 16:48:34.497352    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:48:34.497425    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:48:34.507735    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:48:34.507777    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:48:34.507783    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:48:34.522883    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:48:34.522898    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:48:34.540644    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:48:34.540654    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:48:34.552350    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:48:34.552359    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:48:34.564567    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:48:34.564579    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:48:34.568800    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:48:34.568805    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:48:34.580014    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:48:34.580025    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:48:34.616037    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:48:34.616045    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:48:34.631048    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:48:34.631063    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:48:34.642446    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:48:34.642456    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:48:34.659000    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:48:34.659010    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:48:34.670829    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:48:34.670843    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:48:34.694126    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:48:34.694136    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:48:34.730079    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:48:34.730092    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:48:34.744515    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:48:34.744527    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:48:34.781989    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:48:34.782003    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:48:34.793907    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:48:34.793918    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:48:37.308770    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:42.311319    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:42.311505    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:48:42.325454    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:48:42.325551    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:48:42.336099    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:48:42.336193    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:48:42.346278    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:48:42.346355    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:48:42.357231    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:48:42.357319    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:48:42.367722    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:48:42.367804    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:48:42.378352    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:48:42.378430    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:48:42.388928    4927 logs.go:282] 0 containers: []
	W1001 16:48:42.388938    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:48:42.389012    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:48:42.399036    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:48:42.399054    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:48:42.399060    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:48:42.410497    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:48:42.410509    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:48:42.450729    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:48:42.450740    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:48:42.462362    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:48:42.462374    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:48:42.474958    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:48:42.474972    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:48:42.499272    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:48:42.499285    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:48:42.517246    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:48:42.517257    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:48:42.533595    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:48:42.533606    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:48:42.548285    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:48:42.548297    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:48:42.561649    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:48:42.561660    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:48:42.576362    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:48:42.576375    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:48:42.587807    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:48:42.587820    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:48:42.605663    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:48:42.605672    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:48:42.612036    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:48:42.612046    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:48:42.652831    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:48:42.652842    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:48:42.690570    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:48:42.690582    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:48:42.703084    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:48:42.703098    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:48:45.220475    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:50.222920    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:50.223138    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:48:50.244084    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:48:50.244190    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:48:50.256562    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:48:50.256655    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:48:50.267244    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:48:50.267330    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:48:50.278656    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:48:50.278742    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:48:50.289913    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:48:50.290000    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:48:50.301244    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:48:50.301327    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:48:50.311444    4927 logs.go:282] 0 containers: []
	W1001 16:48:50.311455    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:48:50.311523    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:48:50.322350    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:48:50.322371    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:48:50.322376    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:48:50.359786    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:48:50.359798    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:48:50.378916    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:48:50.378931    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:48:50.392051    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:48:50.392067    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:48:50.405146    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:48:50.405157    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:48:50.417057    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:48:50.417069    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:48:50.433373    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:48:50.433387    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:48:50.445494    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:48:50.445512    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:48:50.450141    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:48:50.450147    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:48:50.491843    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:48:50.491858    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:48:50.506466    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:48:50.506483    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:48:50.527380    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:48:50.527391    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:48:50.540227    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:48:50.540240    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:48:50.564238    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:48:50.564245    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:48:50.603400    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:48:50.603408    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:48:50.621567    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:48:50.621585    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:48:50.638985    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:48:50.639000    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:48:53.150714    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:48:58.153062    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:48:58.153327    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:48:58.177168    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:48:58.177290    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:48:58.193409    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:48:58.193507    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:48:58.205474    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:48:58.205565    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:48:58.218708    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:48:58.218792    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:48:58.229498    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:48:58.229583    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:48:58.240517    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:48:58.240599    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:48:58.250288    4927 logs.go:282] 0 containers: []
	W1001 16:48:58.250297    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:48:58.250359    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:48:58.264882    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:48:58.264901    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:48:58.264906    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:48:58.277484    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:48:58.277495    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:48:58.294873    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:48:58.294884    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:48:58.306330    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:48:58.306341    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:48:58.329499    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:48:58.329509    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:48:58.333924    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:48:58.333933    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:48:58.348719    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:48:58.348729    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:48:58.360085    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:48:58.360097    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:48:58.375673    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:48:58.375685    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:48:58.387554    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:48:58.387566    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:48:58.422614    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:48:58.422626    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:48:58.461719    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:48:58.461729    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:48:58.481220    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:48:58.481235    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:48:58.492427    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:48:58.492438    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:48:58.506522    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:48:58.506533    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:48:58.518497    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:48:58.518510    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:48:58.557197    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:48:58.557204    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:49:01.073024    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:06.075475    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:06.075966    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:49:06.109803    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:49:06.109972    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:49:06.130509    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:49:06.130635    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:49:06.145551    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:49:06.145649    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:49:06.158778    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:49:06.158865    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:49:06.170415    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:49:06.170493    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:49:06.181059    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:49:06.181142    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:49:06.191653    4927 logs.go:282] 0 containers: []
	W1001 16:49:06.191666    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:49:06.191741    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:49:06.205605    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:49:06.205628    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:49:06.205634    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:49:06.210659    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:49:06.210666    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:49:06.244327    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:49:06.244345    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:49:06.271485    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:49:06.271501    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:49:06.288051    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:49:06.288067    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:49:06.325754    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:49:06.325762    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:49:06.337418    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:49:06.337431    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:49:06.375080    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:49:06.375092    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:49:06.388642    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:49:06.388658    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:49:06.402087    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:49:06.402100    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:49:06.426305    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:49:06.426318    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:49:06.443926    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:49:06.443937    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:49:06.456506    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:49:06.456518    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:49:06.474863    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:49:06.474875    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:49:06.512095    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:49:06.512107    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:49:06.526761    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:49:06.526775    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:49:06.542940    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:49:06.542953    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:49:09.057128    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:14.059675    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:14.059921    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:49:14.080887    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:49:14.081000    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:49:14.095371    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:49:14.095456    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:49:14.108063    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:49:14.108142    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:49:14.118393    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:49:14.118464    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:49:14.128710    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:49:14.128792    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:49:14.139281    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:49:14.139357    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:49:14.149466    4927 logs.go:282] 0 containers: []
	W1001 16:49:14.149476    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:49:14.149542    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:49:14.159775    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:49:14.159793    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:49:14.159798    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:49:14.164072    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:49:14.164080    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:49:14.177801    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:49:14.177815    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:49:14.191956    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:49:14.191967    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:49:14.207828    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:49:14.207843    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:49:14.229549    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:49:14.229561    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:49:14.267925    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:49:14.267934    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:49:14.310413    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:49:14.310424    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:49:14.329270    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:49:14.329286    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:49:14.345209    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:49:14.345225    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:49:14.356707    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:49:14.356722    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:49:14.378127    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:49:14.378134    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:49:14.416521    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:49:14.416534    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:49:14.428310    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:49:14.428323    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:49:14.446596    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:49:14.446611    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:49:14.458660    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:49:14.458668    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:49:14.471724    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:49:14.471733    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:49:16.984883    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:21.987242    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:21.987564    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:49:22.018285    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:49:22.018432    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:49:22.037504    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:49:22.037611    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:49:22.051566    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:49:22.051665    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:49:22.062762    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:49:22.062853    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:49:22.073222    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:49:22.073308    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:49:22.083993    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:49:22.084076    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:49:22.094486    4927 logs.go:282] 0 containers: []
	W1001 16:49:22.094505    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:49:22.094573    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:49:22.105791    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:49:22.105807    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:49:22.105813    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:49:22.117409    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:49:22.117424    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:49:22.141020    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:49:22.141031    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:49:22.152738    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:49:22.152750    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:49:22.186954    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:49:22.186969    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:49:22.201573    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:49:22.201586    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:49:22.213876    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:49:22.213887    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:49:22.231594    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:49:22.231613    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:49:22.246804    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:49:22.246818    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:49:22.258167    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:49:22.258184    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:49:22.269498    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:49:22.269512    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:49:22.282528    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:49:22.282542    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:49:22.320373    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:49:22.320387    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:49:22.358582    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:49:22.358594    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:49:22.373432    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:49:22.373448    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:49:22.391132    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:49:22.391141    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:49:22.395385    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:49:22.395393    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:49:24.912109    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:29.914503    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:29.914986    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:49:29.950324    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:49:29.950478    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:49:29.969666    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:49:29.969788    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:49:29.983541    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:49:29.983630    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:49:29.995336    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:49:29.995432    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:49:30.006030    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:49:30.006122    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:49:30.017329    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:49:30.017415    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:49:30.027354    4927 logs.go:282] 0 containers: []
	W1001 16:49:30.027368    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:49:30.027436    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:49:30.038263    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:49:30.038282    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:49:30.038288    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:49:30.075283    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:49:30.075297    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:49:30.093162    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:49:30.093173    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:49:30.107180    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:49:30.107192    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:49:30.119286    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:49:30.119297    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:49:30.132333    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:49:30.132347    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:49:30.154200    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:49:30.154210    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:49:30.165964    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:49:30.165974    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:49:30.170712    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:49:30.170720    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:49:30.209632    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:49:30.209647    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:49:30.223102    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:49:30.223116    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:49:30.241809    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:49:30.241822    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:49:30.258217    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:49:30.258228    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:49:30.269631    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:49:30.269644    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:49:30.304503    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:49:30.304518    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:49:30.319770    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:49:30.319787    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:49:30.331240    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:49:30.331252    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:49:32.845153    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:37.847502    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:37.847783    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:49:37.867465    4927 logs.go:282] 2 containers: [81fb7ac4ff5c 4d26939b1517]
	I1001 16:49:37.867586    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:49:37.881710    4927 logs.go:282] 2 containers: [d0c8dc30f5b0 da81e837a710]
	I1001 16:49:37.881810    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:49:37.895338    4927 logs.go:282] 1 containers: [cc23c1d7064e]
	I1001 16:49:37.895427    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:49:37.906449    4927 logs.go:282] 2 containers: [6faf9f02cc89 b67cc8c69187]
	I1001 16:49:37.906536    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:49:37.922838    4927 logs.go:282] 1 containers: [4df3610170e0]
	I1001 16:49:37.922919    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:49:37.933581    4927 logs.go:282] 2 containers: [d2f56fe9bf73 0e7521e8098a]
	I1001 16:49:37.933672    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:49:37.944096    4927 logs.go:282] 0 containers: []
	W1001 16:49:37.944109    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:49:37.944187    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:49:37.956003    4927 logs.go:282] 2 containers: [a0ad2fd61a78 ab389a88a6c0]
	I1001 16:49:37.956022    4927 logs.go:123] Gathering logs for kube-controller-manager [d2f56fe9bf73] ...
	I1001 16:49:37.956028    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2f56fe9bf73"
	I1001 16:49:37.973381    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:49:37.973394    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:49:37.985722    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:49:37.985733    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:49:38.020854    4927 logs.go:123] Gathering logs for kube-apiserver [81fb7ac4ff5c] ...
	I1001 16:49:38.020866    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81fb7ac4ff5c"
	I1001 16:49:38.034777    4927 logs.go:123] Gathering logs for kube-apiserver [4d26939b1517] ...
	I1001 16:49:38.034788    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d26939b1517"
	I1001 16:49:38.073434    4927 logs.go:123] Gathering logs for etcd [da81e837a710] ...
	I1001 16:49:38.073445    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da81e837a710"
	I1001 16:49:38.092863    4927 logs.go:123] Gathering logs for storage-provisioner [a0ad2fd61a78] ...
	I1001 16:49:38.092874    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a0ad2fd61a78"
	I1001 16:49:38.105142    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:49:38.105154    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:49:38.142001    4927 logs.go:123] Gathering logs for kube-scheduler [6faf9f02cc89] ...
	I1001 16:49:38.142012    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6faf9f02cc89"
	I1001 16:49:38.153970    4927 logs.go:123] Gathering logs for kube-scheduler [b67cc8c69187] ...
	I1001 16:49:38.153981    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b67cc8c69187"
	I1001 16:49:38.170225    4927 logs.go:123] Gathering logs for kube-controller-manager [0e7521e8098a] ...
	I1001 16:49:38.170239    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7521e8098a"
	I1001 16:49:38.183076    4927 logs.go:123] Gathering logs for kube-proxy [4df3610170e0] ...
	I1001 16:49:38.183086    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4df3610170e0"
	I1001 16:49:38.197151    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:49:38.197161    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:49:38.220562    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:49:38.220572    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:49:38.224567    4927 logs.go:123] Gathering logs for etcd [d0c8dc30f5b0] ...
	I1001 16:49:38.224576    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0c8dc30f5b0"
	I1001 16:49:38.238518    4927 logs.go:123] Gathering logs for coredns [cc23c1d7064e] ...
	I1001 16:49:38.238534    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc23c1d7064e"
	I1001 16:49:38.253360    4927 logs.go:123] Gathering logs for storage-provisioner [ab389a88a6c0] ...
	I1001 16:49:38.253373    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ab389a88a6c0"
	I1001 16:49:40.767082    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:45.769347    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:45.769499    4927 kubeadm.go:597] duration metric: took 4m4.435217583s to restartPrimaryControlPlane
	W1001 16:49:45.769652    4927 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1001 16:49:45.769714    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1001 16:49:46.863310    4927 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.093588541s)
	I1001 16:49:46.863373    4927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 16:49:46.868576    4927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 16:49:46.871868    4927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 16:49:46.875042    4927 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 16:49:46.875049    4927 kubeadm.go:157] found existing configuration files:
	
	I1001 16:49:46.875089    4927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/admin.conf
	I1001 16:49:46.877896    4927 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 16:49:46.877951    4927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 16:49:46.881211    4927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/kubelet.conf
	I1001 16:49:46.884096    4927 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 16:49:46.884131    4927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 16:49:46.886921    4927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/controller-manager.conf
	I1001 16:49:46.889921    4927 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 16:49:46.889969    4927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 16:49:46.893215    4927 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/scheduler.conf
	I1001 16:49:46.896205    4927 kubeadm.go:163] "https://control-plane.minikube.internal:50522" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50522 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 16:49:46.896251    4927 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 16:49:46.899107    4927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1001 16:49:46.980874    4927 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 16:49:53.516864    4927 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1001 16:49:53.516893    4927 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 16:49:53.516928    4927 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 16:49:53.516973    4927 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 16:49:53.517023    4927 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1001 16:49:53.517055    4927 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 16:49:53.520148    4927 out.go:235]   - Generating certificates and keys ...
	I1001 16:49:53.520188    4927 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 16:49:53.520223    4927 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 16:49:53.520268    4927 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1001 16:49:53.520305    4927 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1001 16:49:53.520354    4927 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1001 16:49:53.520381    4927 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1001 16:49:53.520413    4927 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1001 16:49:53.520454    4927 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1001 16:49:53.520499    4927 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1001 16:49:53.520538    4927 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1001 16:49:53.520561    4927 kubeadm.go:310] [certs] Using the existing "sa" key
	I1001 16:49:53.520591    4927 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 16:49:53.520624    4927 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 16:49:53.520656    4927 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 16:49:53.520686    4927 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 16:49:53.520711    4927 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 16:49:53.520759    4927 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 16:49:53.520799    4927 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 16:49:53.520823    4927 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 16:49:53.520863    4927 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 16:49:53.531184    4927 out.go:235]   - Booting up control plane ...
	I1001 16:49:53.531229    4927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 16:49:53.531266    4927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 16:49:53.531317    4927 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 16:49:53.531365    4927 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 16:49:53.531450    4927 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1001 16:49:53.531501    4927 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503811 seconds
	I1001 16:49:53.531564    4927 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 16:49:53.531627    4927 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 16:49:53.531658    4927 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 16:49:53.531755    4927 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-342000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 16:49:53.531786    4927 kubeadm.go:310] [bootstrap-token] Using token: f5f5sl.u9431kvc7hveohtv
	I1001 16:49:53.535254    4927 out.go:235]   - Configuring RBAC rules ...
	I1001 16:49:53.535306    4927 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 16:49:53.535352    4927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 16:49:53.535427    4927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 16:49:53.535500    4927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 16:49:53.535563    4927 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 16:49:53.535608    4927 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 16:49:53.535671    4927 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 16:49:53.535701    4927 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 16:49:53.535734    4927 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 16:49:53.535738    4927 kubeadm.go:310] 
	I1001 16:49:53.535770    4927 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 16:49:53.535775    4927 kubeadm.go:310] 
	I1001 16:49:53.535814    4927 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 16:49:53.535817    4927 kubeadm.go:310] 
	I1001 16:49:53.535831    4927 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 16:49:53.535867    4927 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 16:49:53.535895    4927 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 16:49:53.535899    4927 kubeadm.go:310] 
	I1001 16:49:53.535926    4927 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 16:49:53.535929    4927 kubeadm.go:310] 
	I1001 16:49:53.535963    4927 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 16:49:53.535967    4927 kubeadm.go:310] 
	I1001 16:49:53.535993    4927 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 16:49:53.536038    4927 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 16:49:53.536080    4927 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 16:49:53.536085    4927 kubeadm.go:310] 
	I1001 16:49:53.536131    4927 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 16:49:53.536171    4927 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 16:49:53.536174    4927 kubeadm.go:310] 
	I1001 16:49:53.536216    4927 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f5f5sl.u9431kvc7hveohtv \
	I1001 16:49:53.536274    4927 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7410ba584d1420d22d17a85d1568f395de246b7fddabe3e224321915d0b92005 \
	I1001 16:49:53.536287    4927 kubeadm.go:310] 	--control-plane 
	I1001 16:49:53.536291    4927 kubeadm.go:310] 
	I1001 16:49:53.536342    4927 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 16:49:53.536346    4927 kubeadm.go:310] 
	I1001 16:49:53.536390    4927 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f5f5sl.u9431kvc7hveohtv \
	I1001 16:49:53.536444    4927 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7410ba584d1420d22d17a85d1568f395de246b7fddabe3e224321915d0b92005 
	I1001 16:49:53.536450    4927 cni.go:84] Creating CNI manager for ""
	I1001 16:49:53.536458    4927 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:49:53.546139    4927 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1001 16:49:53.550217    4927 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1001 16:49:53.553359    4927 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1001 16:49:53.558084    4927 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 16:49:53.558131    4927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 16:49:53.558153    4927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-342000 minikube.k8s.io/updated_at=2024_10_01T16_49_53_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f51fafbf3cd47f7a462449fed4417351dbbcecb3 minikube.k8s.io/name=stopped-upgrade-342000 minikube.k8s.io/primary=true
	I1001 16:49:53.602884    4927 ops.go:34] apiserver oom_adj: -16
	I1001 16:49:53.602881    4927 kubeadm.go:1113] duration metric: took 44.785833ms to wait for elevateKubeSystemPrivileges
	I1001 16:49:53.602902    4927 kubeadm.go:394] duration metric: took 4m12.283040709s to StartCluster
	I1001 16:49:53.602913    4927 settings.go:142] acquiring lock: {Name:mkd0df72d236cca9ab7a62ebb6aa022c207aaa93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:49:53.603005    4927 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:49:53.603434    4927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/kubeconfig: {Name:mk6821adb20f42e2e1842a7c6bcaf1ce77531dcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:49:53.603642    4927 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:49:53.603734    4927 config.go:182] Loaded profile config "stopped-upgrade-342000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 16:49:53.603682    4927 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 16:49:53.603771    4927 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-342000"
	I1001 16:49:53.603780    4927 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-342000"
	W1001 16:49:53.603785    4927 addons.go:243] addon storage-provisioner should already be in state true
	I1001 16:49:53.603781    4927 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-342000"
	I1001 16:49:53.603798    4927 host.go:66] Checking if "stopped-upgrade-342000" exists ...
	I1001 16:49:53.603802    4927 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-342000"
	I1001 16:49:53.604734    4927 kapi.go:59] client config for stopped-upgrade-342000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/stopped-upgrade-342000/client.key", CAFile:"/Users/jenkins/minikube-integration/19740-1141/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10453e5d0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1001 16:49:53.604855    4927 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-342000"
	W1001 16:49:53.604859    4927 addons.go:243] addon default-storageclass should already be in state true
	I1001 16:49:53.604866    4927 host.go:66] Checking if "stopped-upgrade-342000" exists ...
	I1001 16:49:53.607156    4927 out.go:177] * Verifying Kubernetes components...
	I1001 16:49:53.607495    4927 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 16:49:53.611398    4927 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 16:49:53.611406    4927 sshutil.go:53] new ssh client: &{IP:localhost Port:50486 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/id_rsa Username:docker}
	I1001 16:49:53.615140    4927 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 16:49:53.618237    4927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 16:49:53.622244    4927 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 16:49:53.622250    4927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 16:49:53.622255    4927 sshutil.go:53] new ssh client: &{IP:localhost Port:50486 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/stopped-upgrade-342000/id_rsa Username:docker}
	I1001 16:49:53.708369    4927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 16:49:53.713457    4927 api_server.go:52] waiting for apiserver process to appear ...
	I1001 16:49:53.713510    4927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 16:49:53.716965    4927 api_server.go:72] duration metric: took 113.311208ms to wait for apiserver process to appear ...
	I1001 16:49:53.716973    4927 api_server.go:88] waiting for apiserver healthz status ...
	I1001 16:49:53.716980    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:49:53.729975    4927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 16:49:53.797396    4927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 16:49:54.081190    4927 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1001 16:49:54.081203    4927 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1001 16:49:58.719009    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:49:58.719038    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:03.719207    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:03.719229    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:08.719474    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:08.719502    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:13.720044    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:13.720067    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:18.720557    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:18.720614    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:23.721288    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:23.721340    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1001 16:50:24.082733    4927 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1001 16:50:24.087046    4927 out.go:177] * Enabled addons: storage-provisioner
	I1001 16:50:24.094944    4927 addons.go:510] duration metric: took 30.491580125s for enable addons: enabled=[storage-provisioner]
	I1001 16:50:28.722207    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:28.722228    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:33.723275    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:33.723316    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:38.724738    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:38.724786    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:43.726569    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:43.726609    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:48.726878    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:48.726902    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:50:53.729058    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:50:53.729228    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:50:53.740826    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:50:53.740902    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:50:53.755917    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:50:53.755990    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:50:53.766618    4927 logs.go:282] 2 containers: [f7caca5d7952 406124d13b16]
	I1001 16:50:53.766705    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:50:53.777229    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:50:53.777311    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:50:53.787952    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:50:53.788035    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:50:53.798129    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:50:53.798212    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:50:53.807839    4927 logs.go:282] 0 containers: []
	W1001 16:50:53.807853    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:50:53.807923    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:50:53.818203    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:50:53.818221    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:50:53.818227    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:50:53.822934    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:50:53.822941    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:50:53.856624    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:50:53.856640    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:50:53.876357    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:50:53.876372    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:50:53.900582    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:50:53.900591    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:50:53.935276    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:50:53.935286    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:50:53.950041    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:50:53.950052    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:50:53.964180    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:50:53.964190    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:50:53.975900    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:50:53.975916    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:50:53.987265    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:50:53.987276    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:50:54.001727    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:50:54.001736    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:50:54.013597    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:50:54.013609    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:50:54.030596    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:50:54.030606    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:50:56.545188    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:01.547483    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:01.547655    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:01.558992    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:51:01.559075    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:01.569626    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:51:01.569710    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:01.580500    4927 logs.go:282] 2 containers: [f7caca5d7952 406124d13b16]
	I1001 16:51:01.580585    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:01.590403    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:51:01.590484    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:01.602135    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:51:01.602232    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:01.612346    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:51:01.612427    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:01.622087    4927 logs.go:282] 0 containers: []
	W1001 16:51:01.622098    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:01.622166    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:01.633393    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:51:01.633408    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:01.633414    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:01.659343    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:01.659354    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:51:01.694652    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:01.694671    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:01.734330    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:51:01.734346    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:51:01.749887    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:51:01.749903    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:51:01.762025    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:51:01.762040    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:51:01.776749    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:51:01.776761    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:51:01.788543    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:51:01.788555    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:01.801105    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:01.801121    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:01.805743    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:51:01.805750    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:51:01.819548    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:51:01.819563    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:51:01.831018    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:51:01.831029    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:51:01.842800    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:51:01.842810    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:51:04.361753    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:09.364093    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:09.364498    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:09.399724    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:51:09.399875    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:09.417856    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:51:09.417972    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:09.438587    4927 logs.go:282] 2 containers: [f7caca5d7952 406124d13b16]
	I1001 16:51:09.438687    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:09.449749    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:51:09.449831    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:09.460795    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:51:09.460878    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:09.471470    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:51:09.471541    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:09.481364    4927 logs.go:282] 0 containers: []
	W1001 16:51:09.481382    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:09.481438    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:09.491992    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:51:09.492007    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:51:09.492013    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:51:09.510610    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:51:09.510621    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:51:09.522538    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:51:09.522551    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:51:09.537189    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:51:09.537201    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:51:09.549203    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:51:09.549215    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:09.561971    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:09.561982    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:51:09.598259    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:09.598269    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:09.602822    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:51:09.602832    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:51:09.617266    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:51:09.617281    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:51:09.629191    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:51:09.629209    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:51:09.646167    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:09.646182    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:09.669788    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:09.669797    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:09.703982    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:51:09.703994    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:51:12.218712    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:17.219450    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:17.219637    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:17.231567    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:51:17.231650    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:17.242136    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:51:17.242220    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:17.253280    4927 logs.go:282] 2 containers: [f7caca5d7952 406124d13b16]
	I1001 16:51:17.253359    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:17.263490    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:51:17.263570    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:17.273903    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:51:17.273990    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:17.284716    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:51:17.284800    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:17.294598    4927 logs.go:282] 0 containers: []
	W1001 16:51:17.294611    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:17.294683    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:17.304958    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:51:17.304976    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:17.304981    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:17.309626    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:51:17.309632    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:51:17.323259    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:51:17.323270    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:51:17.335122    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:51:17.335133    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:51:17.347475    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:17.347485    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:17.371090    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:17.371103    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:51:17.405848    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:17.405858    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:17.442021    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:51:17.442037    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:51:17.460231    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:51:17.460243    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:51:17.472138    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:51:17.472152    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:51:17.487219    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:51:17.487229    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:51:17.504181    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:51:17.504194    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:51:17.517088    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:51:17.517104    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:20.030781    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:25.032950    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:25.033078    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:25.046417    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:51:25.046509    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:25.060023    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:51:25.060110    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:25.070328    4927 logs.go:282] 2 containers: [f7caca5d7952 406124d13b16]
	I1001 16:51:25.070401    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:25.081516    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:51:25.081603    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:25.092525    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:51:25.092611    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:25.103189    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:51:25.103277    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:25.113036    4927 logs.go:282] 0 containers: []
	W1001 16:51:25.113047    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:25.113120    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:25.123158    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:51:25.123173    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:51:25.123179    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:51:25.137690    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:25.137701    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:25.142510    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:25.142516    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:25.175821    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:51:25.175837    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:51:25.192184    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:51:25.192198    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:51:25.206110    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:51:25.206121    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:51:25.220438    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:51:25.220448    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:51:25.235469    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:51:25.235480    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:51:25.252958    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:25.252968    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:25.276097    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:25.276106    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:51:25.308974    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:51:25.308981    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:51:25.321019    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:51:25.321035    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:51:25.333403    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:51:25.333418    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:27.847339    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:32.849521    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:32.849773    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:32.867637    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:51:32.867732    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:32.880950    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:51:32.881042    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:32.891924    4927 logs.go:282] 2 containers: [f7caca5d7952 406124d13b16]
	I1001 16:51:32.892003    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:32.907793    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:51:32.907879    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:32.917867    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:51:32.917952    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:32.928667    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:51:32.928745    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:32.939179    4927 logs.go:282] 0 containers: []
	W1001 16:51:32.939192    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:32.939266    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:32.950251    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:51:32.950267    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:32.950272    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:32.975592    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:51:32.975601    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:32.986560    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:32.986570    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:51:33.021144    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:51:33.021152    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:51:33.032381    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:51:33.032397    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:51:33.049041    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:51:33.049052    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:51:33.062905    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:51:33.062915    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:51:33.074266    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:51:33.074281    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:51:33.088865    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:51:33.088878    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:51:33.100159    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:51:33.100172    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:51:33.117839    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:33.117851    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:33.122455    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:33.122464    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:33.157432    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:51:33.157447    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:51:35.671401    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:40.673577    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:40.673701    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:40.688512    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:51:40.688598    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:40.699115    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:51:40.699210    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:40.709930    4927 logs.go:282] 2 containers: [f7caca5d7952 406124d13b16]
	I1001 16:51:40.710014    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:40.720505    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:51:40.720588    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:40.731354    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:51:40.731436    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:40.742317    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:51:40.742399    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:40.752669    4927 logs.go:282] 0 containers: []
	W1001 16:51:40.752680    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:40.752745    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:40.763116    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:51:40.763133    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:51:40.763138    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:51:40.777046    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:51:40.777055    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:51:40.789536    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:51:40.789548    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:51:40.801188    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:51:40.801197    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:51:40.815796    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:40.815807    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:51:40.850805    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:40.850815    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:40.855272    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:40.855284    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:40.888548    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:51:40.888559    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:51:40.903632    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:51:40.903643    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:51:40.920593    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:40.920606    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:40.945459    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:51:40.945475    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:51:40.959979    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:51:40.959992    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:51:40.971951    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:51:40.971966    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:43.485596    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:48.487796    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:48.488109    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:48.508333    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:51:48.508455    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:48.522959    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:51:48.523055    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:48.535060    4927 logs.go:282] 2 containers: [f7caca5d7952 406124d13b16]
	I1001 16:51:48.535150    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:48.545539    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:51:48.545616    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:48.556403    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:51:48.556488    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:48.566606    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:51:48.566694    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:48.577029    4927 logs.go:282] 0 containers: []
	W1001 16:51:48.577040    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:48.577113    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:48.587887    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:51:48.587903    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:51:48.587908    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:51:48.600250    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:51:48.600262    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:51:48.621393    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:48.621410    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:51:48.657522    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:51:48.657538    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:51:48.671347    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:51:48.671362    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:51:48.683348    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:51:48.683364    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:51:48.695062    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:51:48.695072    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:51:48.710579    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:51:48.710594    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:51:48.722184    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:48.722199    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:48.747482    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:51:48.747491    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:48.758576    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:48.758591    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:48.762959    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:48.762967    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:48.796875    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:51:48.796888    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:51:51.313732    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:51:56.315985    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:51:56.316173    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:51:56.329365    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:51:56.329444    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:51:56.345399    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:51:56.345473    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:51:56.355647    4927 logs.go:282] 2 containers: [f7caca5d7952 406124d13b16]
	I1001 16:51:56.355731    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:51:56.367454    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:51:56.367548    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:51:56.378959    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:51:56.379045    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:51:56.389565    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:51:56.389640    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:51:56.400049    4927 logs.go:282] 0 containers: []
	W1001 16:51:56.400061    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:51:56.400127    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:51:56.410860    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:51:56.410877    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:51:56.410884    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:51:56.423217    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:51:56.423232    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:51:56.458939    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:51:56.458948    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:51:56.472648    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:51:56.472661    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:51:56.484314    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:51:56.484329    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:51:56.495039    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:51:56.495055    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:51:56.506880    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:51:56.506896    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:51:56.524083    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:51:56.524103    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:51:56.548213    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:51:56.548223    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:51:56.552285    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:51:56.552294    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:51:56.585934    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:51:56.585949    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:51:56.599973    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:51:56.599984    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:51:56.614008    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:51:56.614024    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:51:59.128154    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:52:04.125015    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:52:04.125312    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:52:04.149756    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:52:04.149898    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:52:04.165545    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:52:04.165651    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:52:04.183423    4927 logs.go:282] 2 containers: [f7caca5d7952 406124d13b16]
	I1001 16:52:04.183506    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:52:04.193942    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:52:04.194027    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:52:04.204251    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:52:04.204336    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:52:04.215025    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:52:04.215094    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:52:04.225710    4927 logs.go:282] 0 containers: []
	W1001 16:52:04.225721    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:52:04.225802    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:52:04.239871    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:52:04.239887    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:52:04.239893    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:52:04.244397    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:52:04.244407    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:52:04.278778    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:52:04.278794    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:52:04.291115    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:52:04.291126    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:52:04.302880    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:52:04.302896    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:52:04.314652    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:52:04.314667    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:52:04.339990    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:52:04.340005    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:52:04.379553    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:52:04.379570    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:52:04.414572    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:52:04.414585    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:52:04.441098    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:52:04.441117    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:52:04.472459    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:52:04.472472    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:52:04.489463    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:52:04.489476    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:52:04.507198    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:52:04.507210    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:52:07.020274    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:52:12.019340    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:52:12.019727    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:52:12.054136    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:52:12.054306    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:52:12.073167    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:52:12.073289    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:52:12.089249    4927 logs.go:282] 4 containers: [4fb7dc6e2140 8ce201a253c1 f7caca5d7952 406124d13b16]
	I1001 16:52:12.089345    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:52:12.102809    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:52:12.102896    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:52:12.113052    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:52:12.113137    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:52:12.128603    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:52:12.128683    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:52:12.144004    4927 logs.go:282] 0 containers: []
	W1001 16:52:12.144023    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:52:12.144097    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:52:12.154865    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:52:12.154886    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:52:12.154892    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:52:12.169584    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:52:12.169594    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:52:12.188666    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:52:12.188678    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:52:12.224156    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:52:12.224168    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:52:12.240176    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:52:12.240187    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:52:12.252269    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:52:12.252279    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:52:12.264748    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:52:12.264759    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:52:12.269278    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:52:12.269290    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:52:12.284867    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:52:12.284879    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:52:12.301715    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:52:12.301727    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:52:12.313251    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:52:12.313266    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:52:12.338004    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:52:12.338013    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:52:12.373001    4927 logs.go:123] Gathering logs for coredns [4fb7dc6e2140] ...
	I1001 16:52:12.373011    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fb7dc6e2140"
	I1001 16:52:12.384604    4927 logs.go:123] Gathering logs for coredns [8ce201a253c1] ...
	I1001 16:52:12.384615    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ce201a253c1"
	I1001 16:52:12.396108    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:52:12.396121    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
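	Note on the block above: the pattern that repeats from here to the end of this log is minikube's apiserver wait loop — poll https://10.0.2.15:8443/healthz with a 5-second client timeout and, after each timeout, re-enumerate the k8s_* containers and dump their recent logs over SSH. The Go program below is a minimal sketch of that shape only; it is not minikube's actual code from api_server.go or logs.go. The healthz URL and the 5-second timeout are taken from the log lines above, while the helper names, the 2.5-second pause between attempts, and the 4-minute overall budget are illustrative assumptions.

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"net/http"
	    	"os/exec"
	    	"time"
	    )

	    // checkHealthz issues a single GET against the apiserver healthz endpoint.
	    func checkHealthz(url string) error {
	    	client := &http.Client{
	    		Timeout: 5 * time.Second, // matches "Client.Timeout exceeded while awaiting headers"
	    		Transport: &http.Transport{
	    			// The apiserver serves a self-signed cert inside the VM; skip verification
	    			// for this illustrative probe only.
	    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	    		},
	    	}
	    	resp, err := client.Get(url)
	    	if err != nil {
	    		return err
	    	}
	    	defer resp.Body.Close()
	    	if resp.StatusCode != http.StatusOK {
	    		return fmt.Errorf("healthz returned %s", resp.Status)
	    	}
	    	return nil
	    }

	    // gatherLogs runs a small subset of the commands seen in the log above
	    // (locally here, rather than over SSH into the guest).
	    func gatherLogs() {
	    	cmds := [][]string{
	    		{"docker", "ps", "-a", "--filter=name=k8s_kube-apiserver", "--format={{.ID}}"},
	    		{"journalctl", "-u", "kubelet", "-n", "400"},
	    		{"journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400"},
	    	}
	    	for _, c := range cmds {
	    		out, _ := exec.Command(c[0], c[1:]...).CombinedOutput() // errors ignored in this sketch
	    		fmt.Printf("$ %v\n%s\n", c, out)
	    	}
	    }

	    func main() {
	    	url := "https://10.0.2.15:8443/healthz"
	    	deadline := time.Now().Add(4 * time.Minute) // illustrative overall budget, not minikube's
	    	for time.Now().Before(deadline) {
	    		if err := checkHealthz(url); err != nil {
	    			fmt.Println("stopped:", err)
	    			gatherLogs()
	    			time.Sleep(2500 * time.Millisecond) // roughly the gap between attempts in the log
	    			continue
	    		}
	    		fmt.Println("apiserver is healthy")
	    		return
	    	}
	    	fmt.Println("timed out waiting for apiserver")
	    }

	In the run above the healthz check never returns healthy, so the loop keeps cycling through the same container enumeration and log collection until the test's start timeout expires.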
	I1001 16:52:14.908605    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:52:19.909069    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:52:19.909506    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:52:19.936709    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:52:19.936852    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:52:19.954761    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:52:19.954861    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:52:19.968003    4927 logs.go:282] 4 containers: [4fb7dc6e2140 8ce201a253c1 f7caca5d7952 406124d13b16]
	I1001 16:52:19.968091    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:52:19.979738    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:52:19.979814    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:52:19.998350    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:52:19.998417    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:52:20.009011    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:52:20.009087    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:52:20.020580    4927 logs.go:282] 0 containers: []
	W1001 16:52:20.020593    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:52:20.020664    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:52:20.031023    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:52:20.031041    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:52:20.031046    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:52:20.035426    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:52:20.035433    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:52:20.068324    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:52:20.068332    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:52:20.086150    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:52:20.086165    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:52:20.099889    4927 logs.go:123] Gathering logs for coredns [8ce201a253c1] ...
	I1001 16:52:20.099904    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ce201a253c1"
	I1001 16:52:20.111297    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:52:20.111308    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:52:20.123260    4927 logs.go:123] Gathering logs for coredns [4fb7dc6e2140] ...
	I1001 16:52:20.123276    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fb7dc6e2140"
	I1001 16:52:20.134602    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:52:20.134620    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:52:20.146455    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:52:20.146464    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:52:20.160049    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:52:20.160061    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:52:20.200290    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:52:20.200304    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:52:20.212857    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:52:20.212868    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:52:20.237064    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:52:20.237080    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:52:20.252203    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:52:20.252217    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:52:20.269879    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:52:20.269894    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:52:22.794643    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:52:27.795707    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:52:27.795979    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:52:27.820831    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:52:27.820971    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:52:27.837537    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:52:27.837647    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:52:27.850652    4927 logs.go:282] 4 containers: [4fb7dc6e2140 8ce201a253c1 f7caca5d7952 406124d13b16]
	I1001 16:52:27.850748    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:52:27.862133    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:52:27.862211    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:52:27.872468    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:52:27.872540    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:52:27.882681    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:52:27.882774    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:52:27.893223    4927 logs.go:282] 0 containers: []
	W1001 16:52:27.893235    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:52:27.893311    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:52:27.903609    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:52:27.903628    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:52:27.903635    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:52:27.938975    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:52:27.938989    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:52:27.953362    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:52:27.953379    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:52:27.965494    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:52:27.965508    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:52:27.999298    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:52:27.999307    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:52:28.003304    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:52:28.003313    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:52:28.017352    4927 logs.go:123] Gathering logs for coredns [8ce201a253c1] ...
	I1001 16:52:28.017363    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ce201a253c1"
	I1001 16:52:28.029682    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:52:28.029695    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:52:28.042708    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:52:28.042722    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:52:28.058089    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:52:28.058105    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:52:28.070571    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:52:28.070583    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:52:28.089494    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:52:28.089504    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:52:28.100561    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:52:28.100573    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:52:28.126462    4927 logs.go:123] Gathering logs for coredns [4fb7dc6e2140] ...
	I1001 16:52:28.126479    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fb7dc6e2140"
	I1001 16:52:28.140991    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:52:28.141003    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:52:30.657166    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:52:35.656776    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:52:35.656993    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:52:35.679906    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:52:35.680008    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:52:35.690810    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:52:35.690897    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:52:35.701325    4927 logs.go:282] 4 containers: [4fb7dc6e2140 8ce201a253c1 f7caca5d7952 406124d13b16]
	I1001 16:52:35.701410    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:52:35.712322    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:52:35.712409    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:52:35.723086    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:52:35.723180    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:52:35.733334    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:52:35.733408    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:52:35.748008    4927 logs.go:282] 0 containers: []
	W1001 16:52:35.748019    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:52:35.748090    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:52:35.758934    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:52:35.758951    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:52:35.758958    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:52:35.773932    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:52:35.773942    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:52:35.787857    4927 logs.go:123] Gathering logs for coredns [4fb7dc6e2140] ...
	I1001 16:52:35.787868    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fb7dc6e2140"
	I1001 16:52:35.803058    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:52:35.803073    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:52:35.817966    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:52:35.817977    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:52:35.834848    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:52:35.834859    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:52:35.846456    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:52:35.846466    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:52:35.871364    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:52:35.871372    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:52:35.875359    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:52:35.875368    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:52:35.886968    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:52:35.886983    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:52:35.922242    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:52:35.922252    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:52:35.939055    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:52:35.939071    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:52:35.951130    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:52:35.951145    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:52:35.964906    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:52:35.964920    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:52:35.999677    4927 logs.go:123] Gathering logs for coredns [8ce201a253c1] ...
	I1001 16:52:35.999689    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ce201a253c1"
	I1001 16:52:38.513194    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:52:43.515437    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:52:43.516057    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:52:43.565105    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:52:43.565222    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:52:43.583874    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:52:43.583971    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:52:43.597158    4927 logs.go:282] 4 containers: [4fb7dc6e2140 8ce201a253c1 f7caca5d7952 406124d13b16]
	I1001 16:52:43.597245    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:52:43.608394    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:52:43.608475    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:52:43.619516    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:52:43.619600    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:52:43.630200    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:52:43.630287    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:52:43.640964    4927 logs.go:282] 0 containers: []
	W1001 16:52:43.640974    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:52:43.641037    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:52:43.652169    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:52:43.652186    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:52:43.652191    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:52:43.687155    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:52:43.687165    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:52:43.691977    4927 logs.go:123] Gathering logs for coredns [8ce201a253c1] ...
	I1001 16:52:43.691985    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ce201a253c1"
	I1001 16:52:43.704048    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:52:43.704058    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:52:43.715760    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:52:43.715769    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:52:43.728190    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:52:43.728201    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:52:43.752273    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:52:43.752279    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:52:43.766602    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:52:43.766612    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:52:43.800464    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:52:43.800474    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:52:43.815332    4927 logs.go:123] Gathering logs for coredns [4fb7dc6e2140] ...
	I1001 16:52:43.815341    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fb7dc6e2140"
	I1001 16:52:43.827298    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:52:43.827313    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:52:43.838733    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:52:43.838743    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:52:43.858371    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:52:43.858382    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:52:43.876461    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:52:43.876470    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:52:43.888520    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:52:43.888536    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:52:46.402384    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:52:51.404629    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:52:51.404830    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:52:51.416474    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:52:51.416565    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:52:51.427084    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:52:51.427164    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:52:51.437676    4927 logs.go:282] 4 containers: [4fb7dc6e2140 8ce201a253c1 f7caca5d7952 406124d13b16]
	I1001 16:52:51.437754    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:52:51.448210    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:52:51.448288    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:52:51.463061    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:52:51.463142    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:52:51.473978    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:52:51.474057    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:52:51.494708    4927 logs.go:282] 0 containers: []
	W1001 16:52:51.494720    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:52:51.494785    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:52:51.505315    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:52:51.505332    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:52:51.505338    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:52:51.538698    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:52:51.538705    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:52:51.573655    4927 logs.go:123] Gathering logs for coredns [4fb7dc6e2140] ...
	I1001 16:52:51.573668    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fb7dc6e2140"
	I1001 16:52:51.589701    4927 logs.go:123] Gathering logs for coredns [8ce201a253c1] ...
	I1001 16:52:51.589713    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ce201a253c1"
	I1001 16:52:51.605079    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:52:51.605094    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:52:51.618577    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:52:51.618586    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:52:51.640375    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:52:51.640388    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:52:51.658662    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:52:51.658677    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:52:51.684308    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:52:51.684316    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:52:51.695793    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:52:51.695808    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:52:51.700423    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:52:51.700433    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:52:51.716246    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:52:51.716257    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:52:51.730459    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:52:51.730467    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:52:51.744347    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:52:51.744363    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:52:51.759435    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:52:51.759446    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:52:54.272997    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:52:59.275572    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:52:59.276173    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:52:59.316039    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:52:59.316226    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:52:59.343958    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:52:59.344073    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:52:59.360821    4927 logs.go:282] 4 containers: [4fb7dc6e2140 8ce201a253c1 f7caca5d7952 406124d13b16]
	I1001 16:52:59.360917    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:52:59.372910    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:52:59.372977    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:52:59.383853    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:52:59.383917    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:52:59.394751    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:52:59.394813    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:52:59.405327    4927 logs.go:282] 0 containers: []
	W1001 16:52:59.405341    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:52:59.405405    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:52:59.417079    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:52:59.417100    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:52:59.417105    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:52:59.421305    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:52:59.421313    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:52:59.433362    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:52:59.433377    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:52:59.449378    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:52:59.449386    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:52:59.464951    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:52:59.464967    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:52:59.482446    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:52:59.482455    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:52:59.501325    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:52:59.501335    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:52:59.513584    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:52:59.513595    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:52:59.547779    4927 logs.go:123] Gathering logs for coredns [4fb7dc6e2140] ...
	I1001 16:52:59.547795    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fb7dc6e2140"
	I1001 16:52:59.560135    4927 logs.go:123] Gathering logs for coredns [8ce201a253c1] ...
	I1001 16:52:59.560144    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ce201a253c1"
	I1001 16:52:59.572118    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:52:59.572129    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:52:59.584409    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:52:59.584418    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:52:59.618956    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:52:59.618963    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:52:59.635639    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:52:59.635649    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:52:59.647610    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:52:59.647621    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:53:02.173092    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:53:07.175876    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:53:07.176458    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:53:07.216444    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:53:07.216617    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:53:07.241572    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:53:07.241685    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:53:07.255904    4927 logs.go:282] 4 containers: [4fb7dc6e2140 8ce201a253c1 f7caca5d7952 406124d13b16]
	I1001 16:53:07.256004    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:53:07.273430    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:53:07.273518    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:53:07.284685    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:53:07.284766    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:53:07.295071    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:53:07.295151    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:53:07.305625    4927 logs.go:282] 0 containers: []
	W1001 16:53:07.305637    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:53:07.305708    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:53:07.315949    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:53:07.315967    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:53:07.315972    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:53:07.356438    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:53:07.356450    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:53:07.368513    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:53:07.368529    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:53:07.380910    4927 logs.go:123] Gathering logs for coredns [8ce201a253c1] ...
	I1001 16:53:07.380921    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ce201a253c1"
	I1001 16:53:07.394007    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:53:07.394022    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:53:07.406016    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:53:07.406031    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:53:07.421764    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:53:07.421773    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:53:07.456507    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:53:07.456515    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:53:07.471199    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:53:07.471208    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:53:07.485341    4927 logs.go:123] Gathering logs for coredns [4fb7dc6e2140] ...
	I1001 16:53:07.485350    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fb7dc6e2140"
	I1001 16:53:07.497706    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:53:07.497715    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:53:07.509795    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:53:07.509804    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:53:07.514260    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:53:07.514269    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:53:07.533361    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:53:07.533374    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:53:07.550507    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:53:07.550516    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:53:10.078470    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:53:15.080612    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:53:15.081152    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:53:15.117306    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:53:15.117464    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:53:15.138360    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:53:15.138479    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:53:15.153871    4927 logs.go:282] 4 containers: [4fb7dc6e2140 8ce201a253c1 f7caca5d7952 406124d13b16]
	I1001 16:53:15.153952    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:53:15.166570    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:53:15.166642    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:53:15.180774    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:53:15.180837    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:53:15.191488    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:53:15.191572    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:53:15.205902    4927 logs.go:282] 0 containers: []
	W1001 16:53:15.205913    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:53:15.205984    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:53:15.216898    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:53:15.216918    4927 logs.go:123] Gathering logs for coredns [8ce201a253c1] ...
	I1001 16:53:15.216923    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ce201a253c1"
	I1001 16:53:15.236967    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:53:15.236976    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:53:15.261849    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:53:15.261860    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:53:15.273762    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:53:15.273774    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:53:15.291064    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:53:15.291076    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:53:15.302948    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:53:15.302959    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:53:15.343058    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:53:15.343070    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:53:15.358500    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:53:15.358511    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:53:15.372936    4927 logs.go:123] Gathering logs for coredns [4fb7dc6e2140] ...
	I1001 16:53:15.372946    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fb7dc6e2140"
	I1001 16:53:15.384758    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:53:15.384772    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:53:15.418571    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:53:15.418581    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:53:15.423520    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:53:15.423530    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:53:15.435230    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:53:15.435240    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:53:15.450066    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:53:15.450077    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:53:15.461541    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:53:15.461556    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:53:17.975142    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:53:22.977860    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:53:22.978415    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:53:23.019173    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:53:23.019347    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:53:23.041208    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:53:23.041340    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:53:23.056696    4927 logs.go:282] 4 containers: [4fb7dc6e2140 8ce201a253c1 f7caca5d7952 406124d13b16]
	I1001 16:53:23.056797    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:53:23.068616    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:53:23.068713    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:53:23.079589    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:53:23.079670    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:53:23.090291    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:53:23.090377    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:53:23.104326    4927 logs.go:282] 0 containers: []
	W1001 16:53:23.104335    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:53:23.104395    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:53:23.114741    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:53:23.114757    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:53:23.114762    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:53:23.119451    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:53:23.119458    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:53:23.153323    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:53:23.153332    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:53:23.168701    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:53:23.168712    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:53:23.191771    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:53:23.191782    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:53:23.205761    4927 logs.go:123] Gathering logs for coredns [8ce201a253c1] ...
	I1001 16:53:23.205773    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ce201a253c1"
	I1001 16:53:23.217167    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:53:23.217180    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:53:23.251554    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:53:23.251562    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:53:23.265313    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:53:23.265325    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:53:23.276809    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:53:23.276818    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:53:23.294046    4927 logs.go:123] Gathering logs for coredns [4fb7dc6e2140] ...
	I1001 16:53:23.294056    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fb7dc6e2140"
	I1001 16:53:23.305848    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:53:23.305863    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:53:23.317515    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:53:23.317525    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:53:23.328700    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:53:23.328709    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:53:23.339950    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:53:23.339961    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:53:25.854551    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:53:30.856607    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:53:30.856764    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:53:30.870505    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:53:30.870593    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:53:30.881186    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:53:30.881271    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:53:30.891614    4927 logs.go:282] 4 containers: [4fb7dc6e2140 8ce201a253c1 f7caca5d7952 406124d13b16]
	I1001 16:53:30.891702    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:53:30.902099    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:53:30.902175    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:53:30.912178    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:53:30.912261    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:53:30.922859    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:53:30.922938    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:53:30.933066    4927 logs.go:282] 0 containers: []
	W1001 16:53:30.933081    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:53:30.933152    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:53:30.944042    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:53:30.944059    4927 logs.go:123] Gathering logs for coredns [4fb7dc6e2140] ...
	I1001 16:53:30.944065    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fb7dc6e2140"
	I1001 16:53:30.955936    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:53:30.955947    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:53:30.968007    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:53:30.968017    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:53:30.979536    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:53:30.979552    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:53:30.991097    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:53:30.991111    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:53:30.995528    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:53:30.995534    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:53:31.035043    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:53:31.035052    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:53:31.046800    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:53:31.046816    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:53:31.064415    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:53:31.064427    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:53:31.097190    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:53:31.097197    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:53:31.108685    4927 logs.go:123] Gathering logs for coredns [8ce201a253c1] ...
	I1001 16:53:31.108696    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ce201a253c1"
	I1001 16:53:31.124818    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:53:31.124834    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:53:31.138934    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:53:31.138944    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:53:31.153886    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:53:31.153896    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:53:31.177178    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:53:31.177185    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:53:33.693228    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:53:38.693997    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:53:38.694316    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:53:38.726744    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:53:38.726940    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:53:38.751094    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:53:38.751171    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:53:38.762596    4927 logs.go:282] 4 containers: [4fb7dc6e2140 8ce201a253c1 f7caca5d7952 406124d13b16]
	I1001 16:53:38.762686    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:53:38.774641    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:53:38.774738    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:53:38.786116    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:53:38.786189    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:53:38.798056    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:53:38.798135    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:53:38.817234    4927 logs.go:282] 0 containers: []
	W1001 16:53:38.817247    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:53:38.817316    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:53:38.830302    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:53:38.830319    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:53:38.830324    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:53:38.866926    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:53:38.866945    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:53:38.882026    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:53:38.882037    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:53:38.907104    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:53:38.907115    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:53:38.918721    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:53:38.918733    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:53:38.932543    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:53:38.932559    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:53:38.944506    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:53:38.944521    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:53:38.958063    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:53:38.958073    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:53:38.970025    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:53:38.970045    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:53:38.984232    4927 logs.go:123] Gathering logs for coredns [4fb7dc6e2140] ...
	I1001 16:53:38.984245    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fb7dc6e2140"
	I1001 16:53:38.998527    4927 logs.go:123] Gathering logs for coredns [8ce201a253c1] ...
	I1001 16:53:38.998539    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ce201a253c1"
	I1001 16:53:39.009888    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:53:39.009904    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:53:39.015168    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:53:39.015179    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:53:39.049085    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:53:39.049098    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:53:39.063408    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:53:39.063418    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:53:41.582940    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:53:46.585175    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:53:46.585670    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1001 16:53:46.618678    4927 logs.go:282] 1 containers: [ea1cd366ffab]
	I1001 16:53:46.618839    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1001 16:53:46.638695    4927 logs.go:282] 1 containers: [0e92518fef05]
	I1001 16:53:46.638829    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1001 16:53:46.654087    4927 logs.go:282] 4 containers: [4fb7dc6e2140 8ce201a253c1 f7caca5d7952 406124d13b16]
	I1001 16:53:46.654182    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1001 16:53:46.666698    4927 logs.go:282] 1 containers: [cdd41a59f1a1]
	I1001 16:53:46.666782    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1001 16:53:46.677524    4927 logs.go:282] 1 containers: [10fd1adda049]
	I1001 16:53:46.677594    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1001 16:53:46.688148    4927 logs.go:282] 1 containers: [7af640a264d1]
	I1001 16:53:46.688231    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1001 16:53:46.700085    4927 logs.go:282] 0 containers: []
	W1001 16:53:46.700098    4927 logs.go:284] No container was found matching "kindnet"
	I1001 16:53:46.700175    4927 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1001 16:53:46.710942    4927 logs.go:282] 1 containers: [a592b1176087]
	I1001 16:53:46.710957    4927 logs.go:123] Gathering logs for Docker ...
	I1001 16:53:46.710962    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1001 16:53:46.735320    4927 logs.go:123] Gathering logs for container status ...
	I1001 16:53:46.735328    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 16:53:46.747068    4927 logs.go:123] Gathering logs for coredns [f7caca5d7952] ...
	I1001 16:53:46.747079    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f7caca5d7952"
	I1001 16:53:46.760157    4927 logs.go:123] Gathering logs for storage-provisioner [a592b1176087] ...
	I1001 16:53:46.760167    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a592b1176087"
	I1001 16:53:46.771644    4927 logs.go:123] Gathering logs for coredns [406124d13b16] ...
	I1001 16:53:46.771658    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 406124d13b16"
	I1001 16:53:46.783798    4927 logs.go:123] Gathering logs for kube-scheduler [cdd41a59f1a1] ...
	I1001 16:53:46.783807    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdd41a59f1a1"
	I1001 16:53:46.803222    4927 logs.go:123] Gathering logs for dmesg ...
	I1001 16:53:46.803233    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 16:53:46.807402    4927 logs.go:123] Gathering logs for coredns [8ce201a253c1] ...
	I1001 16:53:46.807408    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ce201a253c1"
	I1001 16:53:46.819316    4927 logs.go:123] Gathering logs for kube-proxy [10fd1adda049] ...
	I1001 16:53:46.819329    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 10fd1adda049"
	I1001 16:53:46.831371    4927 logs.go:123] Gathering logs for kube-controller-manager [7af640a264d1] ...
	I1001 16:53:46.831383    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7af640a264d1"
	I1001 16:53:46.849748    4927 logs.go:123] Gathering logs for kube-apiserver [ea1cd366ffab] ...
	I1001 16:53:46.849759    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea1cd366ffab"
	I1001 16:53:46.863918    4927 logs.go:123] Gathering logs for etcd [0e92518fef05] ...
	I1001 16:53:46.863931    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e92518fef05"
	I1001 16:53:46.878433    4927 logs.go:123] Gathering logs for coredns [4fb7dc6e2140] ...
	I1001 16:53:46.878446    4927 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4fb7dc6e2140"
	I1001 16:53:46.890521    4927 logs.go:123] Gathering logs for kubelet ...
	I1001 16:53:46.890531    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1001 16:53:46.924260    4927 logs.go:123] Gathering logs for describe nodes ...
	I1001 16:53:46.924270    4927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 16:53:49.469601    4927 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1001 16:53:54.472245    4927 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1001 16:53:54.476874    4927 out.go:201] 
	W1001 16:53:54.481732    4927 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1001 16:53:54.481748    4927 out.go:270] * 
	* 
	W1001 16:53:54.482884    4927 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:53:54.511816    4927 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-342000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (580.59s)
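
The failure above is the apiserver healthz wait timing out: every probe of https://10.0.2.15:8443/healthz in the log returns "context deadline exceeded" until the 6m0s node wait expires. A minimal Go sketch of that kind of probe loop follows; the 5-second per-request timeout, the 2-second poll interval, and the TLS handling are illustrative assumptions, not minikube's actual api_server.go implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // each probe in the log gives up after roughly 5s
		Transport: &http.Transport{
			// the apiserver serves a self-signed certificate during bring-up
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(6 * time.Minute) // "wait 6m0s for node" in the error above
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded, as logged above
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver healthz never reported healthy")
}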

                                                
                                    
TestPause/serial/Start (10s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-727000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-727000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.932902917s)

                                                
                                                
-- stdout --
	* [pause-727000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-727000" primary control-plane node in "pause-727000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-727000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-727000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-727000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-727000 -n pause-727000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-727000 -n pause-727000: exit status 7 (67.2475ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-727000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.00s)
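
This failure, and the NoKubernetes and NetworkPlugins failures that follow, all stop at the same point: socket_vmnet_client cannot connect to the Unix socket at /var/run/socket_vmnet, so the qemu2 VM is never brought up. A minimal Go sketch of a probe for that condition follows; only the socket path is taken from the log, and the helper itself is not part of the test suite.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the Unix socket that socket_vmnet_client expects a socket_vmnet
	// daemon to be listening on (path taken from the error in the log).
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "connection refused" means nothing is accepting connections on the
		// socket, which matches the driver error captured above.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}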

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-908000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-908000 --driver=qemu2 : exit status 80 (9.781236375s)

                                                
                                                
-- stdout --
	* [NoKubernetes-908000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-908000" primary control-plane node in "NoKubernetes-908000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-908000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-908000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-908000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-908000 -n NoKubernetes-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-908000 -n NoKubernetes-908000: exit status 7 (69.594833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.85s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (6.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-908000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-908000 --no-kubernetes --driver=qemu2 : exit status 80 (6.277194458s)

                                                
                                                
-- stdout --
	* [NoKubernetes-908000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-908000
	* Restarting existing qemu2 VM for "NoKubernetes-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-908000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-908000 -n NoKubernetes-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-908000 -n NoKubernetes-908000: exit status 7 (46.581ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (6.32s)

                                                
                                    
TestNoKubernetes/serial/Start (6.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-908000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-908000 --no-kubernetes --driver=qemu2 : exit status 80 (6.31649s)

                                                
                                                
-- stdout --
	* [NoKubernetes-908000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-908000
	* Restarting existing qemu2 VM for "NoKubernetes-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-908000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-908000 -n NoKubernetes-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-908000 -n NoKubernetes-908000: exit status 7 (63.318417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (6.38s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-908000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-908000 --driver=qemu2 : exit status 80 (6.304788875s)

                                                
                                                
-- stdout --
	* [NoKubernetes-908000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-908000
	* Restarting existing qemu2 VM for "NoKubernetes-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-908000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-908000 -n NoKubernetes-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-908000 -n NoKubernetes-908000: exit status 7 (65.562834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (6.37s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-870000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-870000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.880671458s)

                                                
                                                
-- stdout --
	* [auto-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-870000" primary control-plane node in "auto-870000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-870000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:53:51.466793    5130 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:53:51.466912    5130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:53:51.466915    5130 out.go:358] Setting ErrFile to fd 2...
	I1001 16:53:51.466917    5130 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:53:51.467036    5130 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:53:51.468094    5130 out.go:352] Setting JSON to false
	I1001 16:53:51.484211    5130 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4999,"bootTime":1727821832,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:53:51.484286    5130 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:53:51.491649    5130 out.go:177] * [auto-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:53:51.498581    5130 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:53:51.498643    5130 notify.go:220] Checking for updates...
	I1001 16:53:51.506499    5130 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:53:51.509581    5130 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:53:51.512563    5130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:53:51.515557    5130 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:53:51.518615    5130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:53:51.521879    5130 config.go:182] Loaded profile config "multinode-603000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:53:51.521940    5130 config.go:182] Loaded profile config "stopped-upgrade-342000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 16:53:51.521993    5130 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:53:51.526596    5130 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:53:51.533519    5130 start.go:297] selected driver: qemu2
	I1001 16:53:51.533525    5130 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:53:51.533531    5130 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:53:51.535681    5130 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 16:53:51.538580    5130 out.go:177] * Automatically selected the socket_vmnet network
	I1001 16:53:51.541693    5130 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:53:51.541716    5130 cni.go:84] Creating CNI manager for ""
	I1001 16:53:51.541736    5130 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:53:51.541742    5130 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 16:53:51.541784    5130 start.go:340] cluster config:
	{Name:auto-870000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_cli
ent SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:53:51.545200    5130 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:53:51.552582    5130 out.go:177] * Starting "auto-870000" primary control-plane node in "auto-870000" cluster
	I1001 16:53:51.556613    5130 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:53:51.556626    5130 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:53:51.556634    5130 cache.go:56] Caching tarball of preloaded images
	I1001 16:53:51.556684    5130 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:53:51.556689    5130 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:53:51.556745    5130 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/auto-870000/config.json ...
	I1001 16:53:51.556755    5130 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/auto-870000/config.json: {Name:mk1479c05e5e8c6713e98ed3453773d01e6e25d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:53:51.556967    5130 start.go:360] acquireMachinesLock for auto-870000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:53:51.556999    5130 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "auto-870000"
	I1001 16:53:51.557011    5130 start.go:93] Provisioning new machine with config: &{Name:auto-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:auto-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:53:51.557037    5130 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:53:51.565555    5130 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 16:53:51.580654    5130 start.go:159] libmachine.API.Create for "auto-870000" (driver="qemu2")
	I1001 16:53:51.580678    5130 client.go:168] LocalClient.Create starting
	I1001 16:53:51.580739    5130 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:53:51.580771    5130 main.go:141] libmachine: Decoding PEM data...
	I1001 16:53:51.580783    5130 main.go:141] libmachine: Parsing certificate...
	I1001 16:53:51.580824    5130 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:53:51.580846    5130 main.go:141] libmachine: Decoding PEM data...
	I1001 16:53:51.580854    5130 main.go:141] libmachine: Parsing certificate...
	I1001 16:53:51.581215    5130 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:53:51.740069    5130 main.go:141] libmachine: Creating SSH key...
	I1001 16:53:51.860017    5130 main.go:141] libmachine: Creating Disk image...
	I1001 16:53:51.860027    5130 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:53:51.860435    5130 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/auto-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/auto-870000/disk.qcow2
	I1001 16:53:51.869576    5130 main.go:141] libmachine: STDOUT: 
	I1001 16:53:51.869597    5130 main.go:141] libmachine: STDERR: 
	I1001 16:53:51.869650    5130 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/auto-870000/disk.qcow2 +20000M
	I1001 16:53:51.877469    5130 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:53:51.877486    5130 main.go:141] libmachine: STDERR: 
	I1001 16:53:51.877506    5130 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/auto-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/auto-870000/disk.qcow2
	I1001 16:53:51.877514    5130 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:53:51.877525    5130 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:53:51.877555    5130 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/auto-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/auto-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/auto-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:6a:f7:87:85:af -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/auto-870000/disk.qcow2
	I1001 16:53:51.879107    5130 main.go:141] libmachine: STDOUT: 
	I1001 16:53:51.879120    5130 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:53:51.879141    5130 client.go:171] duration metric: took 298.461834ms to LocalClient.Create
	I1001 16:53:53.881355    5130 start.go:128] duration metric: took 2.324330792s to createHost
	I1001 16:53:53.881462    5130 start.go:83] releasing machines lock for "auto-870000", held for 2.3244935s
	W1001 16:53:53.881553    5130 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:53:53.892506    5130 out.go:177] * Deleting "auto-870000" in qemu2 ...
	W1001 16:53:53.925668    5130 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:53:53.925700    5130 start.go:729] Will try again in 5 seconds ...
	I1001 16:53:58.927866    5130 start.go:360] acquireMachinesLock for auto-870000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:53:58.928338    5130 start.go:364] duration metric: took 384.084µs to acquireMachinesLock for "auto-870000"
	I1001 16:53:58.928473    5130 start.go:93] Provisioning new machine with config: &{Name:auto-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:auto-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:53:58.928739    5130 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:53:58.933330    5130 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 16:53:58.975016    5130 start.go:159] libmachine.API.Create for "auto-870000" (driver="qemu2")
	I1001 16:53:58.975085    5130 client.go:168] LocalClient.Create starting
	I1001 16:53:58.975198    5130 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:53:58.975263    5130 main.go:141] libmachine: Decoding PEM data...
	I1001 16:53:58.975277    5130 main.go:141] libmachine: Parsing certificate...
	I1001 16:53:58.975350    5130 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:53:58.975389    5130 main.go:141] libmachine: Decoding PEM data...
	I1001 16:53:58.975402    5130 main.go:141] libmachine: Parsing certificate...
	I1001 16:53:58.975980    5130 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:53:59.138834    5130 main.go:141] libmachine: Creating SSH key...
	I1001 16:53:59.255246    5130 main.go:141] libmachine: Creating Disk image...
	I1001 16:53:59.255254    5130 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:53:59.255541    5130 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/auto-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/auto-870000/disk.qcow2
	I1001 16:53:59.265091    5130 main.go:141] libmachine: STDOUT: 
	I1001 16:53:59.265105    5130 main.go:141] libmachine: STDERR: 
	I1001 16:53:59.265168    5130 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/auto-870000/disk.qcow2 +20000M
	I1001 16:53:59.273214    5130 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:53:59.273231    5130 main.go:141] libmachine: STDERR: 
	I1001 16:53:59.273244    5130 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/auto-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/auto-870000/disk.qcow2
	I1001 16:53:59.273250    5130 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:53:59.273259    5130 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:53:59.273291    5130 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/auto-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/auto-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/auto-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:f6:33:5f:69:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/auto-870000/disk.qcow2
	I1001 16:53:59.274955    5130 main.go:141] libmachine: STDOUT: 
	I1001 16:53:59.274975    5130 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:53:59.274988    5130 client.go:171] duration metric: took 299.901917ms to LocalClient.Create
	I1001 16:54:01.277270    5130 start.go:128] duration metric: took 2.348537542s to createHost
	I1001 16:54:01.277349    5130 start.go:83] releasing machines lock for "auto-870000", held for 2.349028792s
	W1001 16:54:01.277663    5130 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:01.287320    5130 out.go:201] 
	W1001 16:54:01.295496    5130 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:54:01.295537    5130 out.go:270] * 
	* 
	W1001 16:54:01.298472    5130 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:54:01.308269    5130 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.88s)
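
The stderr above shows the full create path for the qemu2 driver: the disk image is prepared with qemu-img, then qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which hands the VM a descriptor (fd=3 in the -netdev argument) connected to /var/run/socket_vmnet; when that connection is refused, the host is deleted and the create is retried once after 5 seconds before the test exits with GUEST_PROVISION. A rough Go sketch of that launch step follows; the binary path and flags are copied from the "executing:" line in the log, but the argument list is trimmed and the wrapper is illustrative, not minikube's libmachine code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// socket_vmnet_client takes the socket path first, then the QEMU command line;
	// a missing daemon on the socket surfaces as "Failed to connect ... Connection
	// refused" before QEMU itself is ever started.
	args := []string{
		"/var/run/socket_vmnet",
		"qemu-system-aarch64",
		"-M", "virt,highmem=off",
		"-cpu", "host",
		"-accel", "hvf",
		"-m", "3072",
		"-smp", "2",
		"-netdev", "socket,id=net0,fd=3", // fd 3 is the descriptor provided by socket_vmnet_client
		"-device", "virtio-net-pci,netdev=net0",
		"-daemonize",
		"disk.qcow2", // stand-in for the per-profile disk image path shown in the log
	}
	out, err := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client", args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("launch failed:", err)
	}
}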

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-870000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-870000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.968652125s)

                                                
                                                
-- stdout --
	* [kindnet-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-870000" primary control-plane node in "kindnet-870000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-870000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:54:03.489553    5243 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:54:03.489690    5243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:54:03.489694    5243 out.go:358] Setting ErrFile to fd 2...
	I1001 16:54:03.489696    5243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:54:03.489847    5243 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:54:03.490937    5243 out.go:352] Setting JSON to false
	I1001 16:54:03.506941    5243 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5011,"bootTime":1727821832,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:54:03.507024    5243 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:54:03.512566    5243 out.go:177] * [kindnet-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:54:03.520430    5243 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:54:03.520476    5243 notify.go:220] Checking for updates...
	I1001 16:54:03.527582    5243 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:54:03.529038    5243 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:54:03.532545    5243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:54:03.535599    5243 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:54:03.538592    5243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:54:03.541925    5243 config.go:182] Loaded profile config "multinode-603000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:54:03.541990    5243 config.go:182] Loaded profile config "stopped-upgrade-342000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 16:54:03.542045    5243 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:54:03.546552    5243 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:54:03.553515    5243 start.go:297] selected driver: qemu2
	I1001 16:54:03.553521    5243 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:54:03.553528    5243 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:54:03.555643    5243 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 16:54:03.558500    5243 out.go:177] * Automatically selected the socket_vmnet network
	I1001 16:54:03.561647    5243 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:54:03.561671    5243 cni.go:84] Creating CNI manager for "kindnet"
	I1001 16:54:03.561676    5243 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 16:54:03.561715    5243 start.go:340] cluster config:
	{Name:kindnet-870000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/soc
ket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:54:03.565276    5243 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:54:03.568559    5243 out.go:177] * Starting "kindnet-870000" primary control-plane node in "kindnet-870000" cluster
	I1001 16:54:03.575524    5243 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:54:03.575539    5243 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:54:03.575553    5243 cache.go:56] Caching tarball of preloaded images
	I1001 16:54:03.575615    5243 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:54:03.575620    5243 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:54:03.575679    5243 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/kindnet-870000/config.json ...
	I1001 16:54:03.575689    5243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/kindnet-870000/config.json: {Name:mk147972a6f1a85913b22b5c46e067699892d169 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:54:03.575893    5243 start.go:360] acquireMachinesLock for kindnet-870000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:54:03.575923    5243 start.go:364] duration metric: took 24.125µs to acquireMachinesLock for "kindnet-870000"
	I1001 16:54:03.575934    5243 start.go:93] Provisioning new machine with config: &{Name:kindnet-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:kindnet-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:54:03.575963    5243 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:54:03.584382    5243 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 16:54:03.599840    5243 start.go:159] libmachine.API.Create for "kindnet-870000" (driver="qemu2")
	I1001 16:54:03.599867    5243 client.go:168] LocalClient.Create starting
	I1001 16:54:03.599924    5243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:54:03.599966    5243 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:03.599979    5243 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:03.600019    5243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:54:03.600044    5243 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:03.600050    5243 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:03.600435    5243 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:54:03.756753    5243 main.go:141] libmachine: Creating SSH key...
	I1001 16:54:03.844902    5243 main.go:141] libmachine: Creating Disk image...
	I1001 16:54:03.844912    5243 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:54:03.845181    5243 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kindnet-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kindnet-870000/disk.qcow2
	I1001 16:54:03.854254    5243 main.go:141] libmachine: STDOUT: 
	I1001 16:54:03.854277    5243 main.go:141] libmachine: STDERR: 
	I1001 16:54:03.854337    5243 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kindnet-870000/disk.qcow2 +20000M
	I1001 16:54:03.862171    5243 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:54:03.862187    5243 main.go:141] libmachine: STDERR: 
	I1001 16:54:03.862209    5243 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kindnet-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kindnet-870000/disk.qcow2
	I1001 16:54:03.862213    5243 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:54:03.862225    5243 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:54:03.862262    5243 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kindnet-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kindnet-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kindnet-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:6c:58:54:58:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kindnet-870000/disk.qcow2
	I1001 16:54:03.863897    5243 main.go:141] libmachine: STDOUT: 
	I1001 16:54:03.863913    5243 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:54:03.863936    5243 client.go:171] duration metric: took 264.067375ms to LocalClient.Create
	I1001 16:54:05.865872    5243 start.go:128] duration metric: took 2.289914709s to createHost
	I1001 16:54:05.865951    5243 start.go:83] releasing machines lock for "kindnet-870000", held for 2.290057167s
	W1001 16:54:05.865995    5243 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:05.879191    5243 out.go:177] * Deleting "kindnet-870000" in qemu2 ...
	W1001 16:54:05.908542    5243 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:05.908566    5243 start.go:729] Will try again in 5 seconds ...
	I1001 16:54:10.908828    5243 start.go:360] acquireMachinesLock for kindnet-870000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:54:10.909329    5243 start.go:364] duration metric: took 419.834µs to acquireMachinesLock for "kindnet-870000"
	I1001 16:54:10.909445    5243 start.go:93] Provisioning new machine with config: &{Name:kindnet-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:kindnet-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:54:10.909709    5243 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:54:10.927177    5243 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 16:54:10.960514    5243 start.go:159] libmachine.API.Create for "kindnet-870000" (driver="qemu2")
	I1001 16:54:10.960572    5243 client.go:168] LocalClient.Create starting
	I1001 16:54:10.960683    5243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:54:10.960744    5243 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:10.960756    5243 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:10.960811    5243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:54:10.960849    5243 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:10.960857    5243 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:10.961290    5243 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:54:11.212924    5243 main.go:141] libmachine: Creating SSH key...
	I1001 16:54:11.369689    5243 main.go:141] libmachine: Creating Disk image...
	I1001 16:54:11.369702    5243 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:54:11.369910    5243 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kindnet-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kindnet-870000/disk.qcow2
	I1001 16:54:11.380013    5243 main.go:141] libmachine: STDOUT: 
	I1001 16:54:11.380031    5243 main.go:141] libmachine: STDERR: 
	I1001 16:54:11.380114    5243 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kindnet-870000/disk.qcow2 +20000M
	I1001 16:54:11.389367    5243 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:54:11.389394    5243 main.go:141] libmachine: STDERR: 
	I1001 16:54:11.389410    5243 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kindnet-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kindnet-870000/disk.qcow2
	I1001 16:54:11.389419    5243 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:54:11.389444    5243 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:54:11.389477    5243 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kindnet-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kindnet-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kindnet-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:9b:fe:d5:5a:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kindnet-870000/disk.qcow2
	I1001 16:54:11.391524    5243 main.go:141] libmachine: STDOUT: 
	I1001 16:54:11.391541    5243 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:54:11.391556    5243 client.go:171] duration metric: took 430.986875ms to LocalClient.Create
	I1001 16:54:13.393637    5243 start.go:128] duration metric: took 2.483948458s to createHost
	I1001 16:54:13.393684    5243 start.go:83] releasing machines lock for "kindnet-870000", held for 2.484379709s
	W1001 16:54:13.393913    5243 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:13.403357    5243 out.go:201] 
	W1001 16:54:13.407414    5243 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:54:13.407440    5243 out.go:270] * 
	* 
	W1001 16:54:13.408988    5243 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:54:13.419136    5243 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.97s)
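
Both create attempts above die before the guest ever boots: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so libmachine never launches qemu-system-aarch64. A minimal host-side check, assuming the from-source install under /opt/socket_vmnet that the command line above uses (the paths and the trailing probe command are illustrative, not part of the test run):

	# confirm the daemon is alive and its unix socket exists
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# exercise the socket the same way libmachine does, with a harmless command
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true

If the daemon is down, every qemu2-driver start in this group fails identically before Kubernetes or the selected CNI is exercised, which matches the calico and custom-flannel results below.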

TestNetworkPlugins/group/calico/Start (10.18s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-870000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-870000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (10.173761834s)

-- stdout --
	* [calico-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-870000" primary control-plane node in "calico-870000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-870000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 16:54:15.680457    5363 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:54:15.680621    5363 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:54:15.680624    5363 out.go:358] Setting ErrFile to fd 2...
	I1001 16:54:15.680626    5363 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:54:15.680765    5363 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:54:15.681871    5363 out.go:352] Setting JSON to false
	I1001 16:54:15.698187    5363 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5023,"bootTime":1727821832,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:54:15.698263    5363 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:54:15.704792    5363 out.go:177] * [calico-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:54:15.712703    5363 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:54:15.712717    5363 notify.go:220] Checking for updates...
	I1001 16:54:15.719659    5363 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:54:15.722713    5363 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:54:15.725717    5363 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:54:15.728736    5363 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:54:15.731655    5363 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:54:15.735018    5363 config.go:182] Loaded profile config "multinode-603000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:54:15.735082    5363 config.go:182] Loaded profile config "stopped-upgrade-342000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1001 16:54:15.735133    5363 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:54:15.739593    5363 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:54:15.746686    5363 start.go:297] selected driver: qemu2
	I1001 16:54:15.746694    5363 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:54:15.746700    5363 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:54:15.749076    5363 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 16:54:15.752652    5363 out.go:177] * Automatically selected the socket_vmnet network
	I1001 16:54:15.755731    5363 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:54:15.755748    5363 cni.go:84] Creating CNI manager for "calico"
	I1001 16:54:15.755757    5363 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1001 16:54:15.755798    5363 start.go:340] cluster config:
	{Name:calico-870000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:54:15.759547    5363 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:54:15.766652    5363 out.go:177] * Starting "calico-870000" primary control-plane node in "calico-870000" cluster
	I1001 16:54:15.770673    5363 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:54:15.770688    5363 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:54:15.770697    5363 cache.go:56] Caching tarball of preloaded images
	I1001 16:54:15.770765    5363 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:54:15.770772    5363 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:54:15.770850    5363 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/calico-870000/config.json ...
	I1001 16:54:15.770866    5363 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/calico-870000/config.json: {Name:mk5feb5f6380f987d3c761fc8d37165de1207af2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:54:15.771079    5363 start.go:360] acquireMachinesLock for calico-870000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:54:15.771110    5363 start.go:364] duration metric: took 25.667µs to acquireMachinesLock for "calico-870000"
	I1001 16:54:15.771122    5363 start.go:93] Provisioning new machine with config: &{Name:calico-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:calico-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:54:15.771153    5363 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:54:15.779698    5363 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 16:54:15.796672    5363 start.go:159] libmachine.API.Create for "calico-870000" (driver="qemu2")
	I1001 16:54:15.796708    5363 client.go:168] LocalClient.Create starting
	I1001 16:54:15.796770    5363 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:54:15.796800    5363 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:15.796810    5363 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:15.796858    5363 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:54:15.796880    5363 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:15.796892    5363 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:15.797290    5363 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:54:15.954160    5363 main.go:141] libmachine: Creating SSH key...
	I1001 16:54:16.151406    5363 main.go:141] libmachine: Creating Disk image...
	I1001 16:54:16.151417    5363 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:54:16.151698    5363 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/calico-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/calico-870000/disk.qcow2
	I1001 16:54:16.161496    5363 main.go:141] libmachine: STDOUT: 
	I1001 16:54:16.161521    5363 main.go:141] libmachine: STDERR: 
	I1001 16:54:16.161578    5363 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/calico-870000/disk.qcow2 +20000M
	I1001 16:54:16.169943    5363 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:54:16.169957    5363 main.go:141] libmachine: STDERR: 
	I1001 16:54:16.169976    5363 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/calico-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/calico-870000/disk.qcow2
	I1001 16:54:16.169982    5363 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:54:16.169994    5363 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:54:16.170018    5363 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/calico-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/calico-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/calico-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:03:69:e9:73:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/calico-870000/disk.qcow2
	I1001 16:54:16.171745    5363 main.go:141] libmachine: STDOUT: 
	I1001 16:54:16.171763    5363 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:54:16.171783    5363 client.go:171] duration metric: took 375.074542ms to LocalClient.Create
	I1001 16:54:18.174038    5363 start.go:128] duration metric: took 2.402889875s to createHost
	I1001 16:54:18.174148    5363 start.go:83] releasing machines lock for "calico-870000", held for 2.403065959s
	W1001 16:54:18.174241    5363 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:18.181476    5363 out.go:177] * Deleting "calico-870000" in qemu2 ...
	W1001 16:54:18.213220    5363 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:18.213257    5363 start.go:729] Will try again in 5 seconds ...
	I1001 16:54:23.215370    5363 start.go:360] acquireMachinesLock for calico-870000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:54:23.215864    5363 start.go:364] duration metric: took 409.708µs to acquireMachinesLock for "calico-870000"
	I1001 16:54:23.215922    5363 start.go:93] Provisioning new machine with config: &{Name:calico-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:calico-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:54:23.216260    5363 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:54:23.228036    5363 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 16:54:23.280528    5363 start.go:159] libmachine.API.Create for "calico-870000" (driver="qemu2")
	I1001 16:54:23.280581    5363 client.go:168] LocalClient.Create starting
	I1001 16:54:23.280741    5363 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:54:23.280812    5363 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:23.280862    5363 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:23.280926    5363 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:54:23.280972    5363 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:23.280987    5363 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:23.281500    5363 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:54:23.452879    5363 main.go:141] libmachine: Creating SSH key...
	I1001 16:54:23.757954    5363 main.go:141] libmachine: Creating Disk image...
	I1001 16:54:23.757969    5363 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:54:23.758236    5363 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/calico-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/calico-870000/disk.qcow2
	I1001 16:54:23.768210    5363 main.go:141] libmachine: STDOUT: 
	I1001 16:54:23.768228    5363 main.go:141] libmachine: STDERR: 
	I1001 16:54:23.768287    5363 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/calico-870000/disk.qcow2 +20000M
	I1001 16:54:23.776520    5363 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:54:23.776536    5363 main.go:141] libmachine: STDERR: 
	I1001 16:54:23.776552    5363 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/calico-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/calico-870000/disk.qcow2
	I1001 16:54:23.776560    5363 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:54:23.776569    5363 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:54:23.776608    5363 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/calico-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/calico-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/calico-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:0e:54:ab:db:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/calico-870000/disk.qcow2
	I1001 16:54:23.778275    5363 main.go:141] libmachine: STDOUT: 
	I1001 16:54:23.778289    5363 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:54:23.778302    5363 client.go:171] duration metric: took 497.721958ms to LocalClient.Create
	I1001 16:54:25.780655    5363 start.go:128] duration metric: took 2.56430725s to createHost
	I1001 16:54:25.780762    5363 start.go:83] releasing machines lock for "calico-870000", held for 2.564915459s
	W1001 16:54:25.781121    5363 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:25.790551    5363 out.go:201] 
	W1001 16:54:25.798696    5363 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:54:25.798734    5363 out.go:270] * 
	* 
	W1001 16:54:25.801502    5363 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:54:25.811557    5363 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (10.18s)
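
The calico run fails at exactly the same point as kindnet: LocalClient.Create aborts on the refused socket connection, so the CNI choice is never actually exercised. A sketch of a recovery path, assuming socket_vmnet was installed from source under /opt/socket_vmnet as SocketVMnetClientPath/SocketVMnetPath in the config dump suggest (the gateway address and exact flags below are illustrative and depend on the installation):

	# relaunch the daemon as root, then retry one start to confirm connectivity
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
	out/minikube-darwin-arm64 start -p calico-870000 --memory=3072 --cni=calico --driver=qemu2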

TestNetworkPlugins/group/custom-flannel/Start (10.03s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-870000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-870000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (10.02744925s)

-- stdout --
	* [custom-flannel-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-870000" primary control-plane node in "custom-flannel-870000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-870000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 16:54:27.482439    5445 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:54:27.482548    5445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:54:27.482551    5445 out.go:358] Setting ErrFile to fd 2...
	I1001 16:54:27.482554    5445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:54:27.482663    5445 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:54:27.486442    5445 out.go:352] Setting JSON to false
	I1001 16:54:27.503374    5445 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5035,"bootTime":1727821832,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:54:27.503446    5445 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:54:27.507865    5445 out.go:177] * [custom-flannel-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:54:27.515008    5445 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:54:27.515120    5445 notify.go:220] Checking for updates...
	I1001 16:54:27.521930    5445 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:54:27.528957    5445 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:54:27.540967    5445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:54:27.551920    5445 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:54:27.554921    5445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:54:27.558292    5445 config.go:182] Loaded profile config "calico-870000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:54:27.558374    5445 config.go:182] Loaded profile config "multinode-603000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:54:27.558435    5445 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:54:27.565998    5445 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:54:27.572941    5445 start.go:297] selected driver: qemu2
	I1001 16:54:27.572955    5445 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:54:27.572965    5445 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:54:27.575442    5445 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 16:54:27.578908    5445 out.go:177] * Automatically selected the socket_vmnet network
	I1001 16:54:27.580179    5445 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:54:27.580197    5445 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1001 16:54:27.580205    5445 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1001 16:54:27.580244    5445 start.go:340] cluster config:
	{Name:custom-flannel-870000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:54:27.583514    5445 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:54:27.591967    5445 out.go:177] * Starting "custom-flannel-870000" primary control-plane node in "custom-flannel-870000" cluster
	I1001 16:54:27.596021    5445 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:54:27.596035    5445 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:54:27.596049    5445 cache.go:56] Caching tarball of preloaded images
	I1001 16:54:27.596113    5445 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:54:27.596118    5445 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:54:27.596191    5445 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/custom-flannel-870000/config.json ...
	I1001 16:54:27.596200    5445 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/custom-flannel-870000/config.json: {Name:mkbaadfeb1030c98d7bead9f8cd125ffefbe7a1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:54:27.596601    5445 start.go:360] acquireMachinesLock for custom-flannel-870000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:54:27.596631    5445 start.go:364] duration metric: took 23.833µs to acquireMachinesLock for "custom-flannel-870000"
	I1001 16:54:27.596641    5445 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:54:27.596665    5445 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:54:27.606879    5445 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 16:54:27.623921    5445 start.go:159] libmachine.API.Create for "custom-flannel-870000" (driver="qemu2")
	I1001 16:54:27.623947    5445 client.go:168] LocalClient.Create starting
	I1001 16:54:27.624011    5445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:54:27.624045    5445 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:27.624054    5445 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:27.624099    5445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:54:27.624126    5445 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:27.624132    5445 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:27.624606    5445 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:54:27.911598    5445 main.go:141] libmachine: Creating SSH key...
	I1001 16:54:28.026328    5445 main.go:141] libmachine: Creating Disk image...
	I1001 16:54:28.026334    5445 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:54:28.026543    5445 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/custom-flannel-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/custom-flannel-870000/disk.qcow2
	I1001 16:54:28.038566    5445 main.go:141] libmachine: STDOUT: 
	I1001 16:54:28.038594    5445 main.go:141] libmachine: STDERR: 
	I1001 16:54:28.038680    5445 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/custom-flannel-870000/disk.qcow2 +20000M
	I1001 16:54:28.047044    5445 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:54:28.047066    5445 main.go:141] libmachine: STDERR: 
	I1001 16:54:28.047080    5445 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/custom-flannel-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/custom-flannel-870000/disk.qcow2
	I1001 16:54:28.047084    5445 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:54:28.047099    5445 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:54:28.047135    5445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/custom-flannel-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/custom-flannel-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/custom-flannel-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:e4:be:27:42:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/custom-flannel-870000/disk.qcow2
	I1001 16:54:28.049156    5445 main.go:141] libmachine: STDOUT: 
	I1001 16:54:28.049176    5445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:54:28.049196    5445 client.go:171] duration metric: took 425.250458ms to LocalClient.Create
	I1001 16:54:30.051398    5445 start.go:128] duration metric: took 2.454727458s to createHost
	I1001 16:54:30.051495    5445 start.go:83] releasing machines lock for "custom-flannel-870000", held for 2.454892292s
	W1001 16:54:30.051616    5445 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:30.066945    5445 out.go:177] * Deleting "custom-flannel-870000" in qemu2 ...
	W1001 16:54:30.096478    5445 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:30.096504    5445 start.go:729] Will try again in 5 seconds ...
	I1001 16:54:35.097631    5445 start.go:360] acquireMachinesLock for custom-flannel-870000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:54:35.097892    5445 start.go:364] duration metric: took 197.5µs to acquireMachinesLock for "custom-flannel-870000"
	I1001 16:54:35.097971    5445 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:54:35.098293    5445 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:54:35.101873    5445 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 16:54:35.148781    5445 start.go:159] libmachine.API.Create for "custom-flannel-870000" (driver="qemu2")
	I1001 16:54:35.148838    5445 client.go:168] LocalClient.Create starting
	I1001 16:54:35.148998    5445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:54:35.149075    5445 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:35.149099    5445 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:35.149181    5445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:54:35.149236    5445 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:35.149248    5445 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:35.149848    5445 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:54:35.328503    5445 main.go:141] libmachine: Creating SSH key...
	I1001 16:54:35.413791    5445 main.go:141] libmachine: Creating Disk image...
	I1001 16:54:35.413797    5445 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:54:35.414027    5445 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/custom-flannel-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/custom-flannel-870000/disk.qcow2
	I1001 16:54:35.422968    5445 main.go:141] libmachine: STDOUT: 
	I1001 16:54:35.422990    5445 main.go:141] libmachine: STDERR: 
	I1001 16:54:35.423052    5445 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/custom-flannel-870000/disk.qcow2 +20000M
	I1001 16:54:35.431064    5445 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:54:35.431085    5445 main.go:141] libmachine: STDERR: 
	I1001 16:54:35.431097    5445 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/custom-flannel-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/custom-flannel-870000/disk.qcow2
	I1001 16:54:35.431101    5445 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:54:35.431111    5445 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:54:35.431144    5445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/custom-flannel-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/custom-flannel-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/custom-flannel-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:b4:c8:8e:1e:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/custom-flannel-870000/disk.qcow2
	I1001 16:54:35.432827    5445 main.go:141] libmachine: STDOUT: 
	I1001 16:54:35.432845    5445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:54:35.432858    5445 client.go:171] duration metric: took 284.017666ms to LocalClient.Create
	I1001 16:54:37.435010    5445 start.go:128] duration metric: took 2.336726208s to createHost
	I1001 16:54:37.435067    5445 start.go:83] releasing machines lock for "custom-flannel-870000", held for 2.337195792s
	W1001 16:54:37.435578    5445 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:37.449123    5445 out.go:201] 
	W1001 16:54:37.454364    5445 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:54:37.454397    5445 out.go:270] * 
	* 
	W1001 16:54:37.457002    5445 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:54:37.464184    5445 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (10.03s)
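Every start in this network-plugins group fails the same way: /opt/socket_vmnet/bin/socket_vmnet_client exits with "Connection refused" against /var/run/socket_vmnet before QEMU is ever launched, so no VM is created. A minimal host-side check, sketched here on the assumption that socket_vmnet was installed via Homebrew as in minikube's qemu2 driver setup (the service name and restart step are assumptions about this agent, not something the log confirms):

	# verify the daemon socket exists and a socket_vmnet process is serving it
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# if nothing is listening, restart the Homebrew-managed service (it must run as root)
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet

The same refusal shows up in the false and enable-default-cni runs below, so a single daemon outage on the host explains this whole group.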

TestNetworkPlugins/group/false/Start (11.49s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-870000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-870000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (11.487041375s)

-- stdout --
	* [false-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-870000" primary control-plane node in "false-870000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-870000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 16:54:28.427355    5493 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:54:28.427498    5493 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:54:28.427501    5493 out.go:358] Setting ErrFile to fd 2...
	I1001 16:54:28.427504    5493 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:54:28.427629    5493 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:54:28.428695    5493 out.go:352] Setting JSON to false
	I1001 16:54:28.444968    5493 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5036,"bootTime":1727821832,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:54:28.445045    5493 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:54:28.455983    5493 out.go:177] * [false-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:54:28.458931    5493 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:54:28.458991    5493 notify.go:220] Checking for updates...
	I1001 16:54:28.465955    5493 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:54:28.468945    5493 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:54:28.471987    5493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:54:28.474993    5493 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:54:28.477920    5493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:54:28.481308    5493 config.go:182] Loaded profile config "custom-flannel-870000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:54:28.481381    5493 config.go:182] Loaded profile config "multinode-603000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:54:28.481430    5493 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:54:28.486000    5493 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:54:28.492938    5493 start.go:297] selected driver: qemu2
	I1001 16:54:28.492944    5493 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:54:28.492950    5493 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:54:28.495258    5493 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 16:54:28.497942    5493 out.go:177] * Automatically selected the socket_vmnet network
	I1001 16:54:28.499391    5493 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:54:28.499413    5493 cni.go:84] Creating CNI manager for "false"
	I1001 16:54:28.499447    5493 start.go:340] cluster config:
	{Name:false-870000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:54:28.503071    5493 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:54:28.510977    5493 out.go:177] * Starting "false-870000" primary control-plane node in "false-870000" cluster
	I1001 16:54:28.514909    5493 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:54:28.514925    5493 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:54:28.514934    5493 cache.go:56] Caching tarball of preloaded images
	I1001 16:54:28.515012    5493 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:54:28.515019    5493 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:54:28.515086    5493 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/false-870000/config.json ...
	I1001 16:54:28.515098    5493 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/false-870000/config.json: {Name:mkbd81d826e000e74d76522df71d6faa5a7248f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:54:28.515462    5493 start.go:360] acquireMachinesLock for false-870000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:54:30.051684    5493 start.go:364] duration metric: took 1.536221666s to acquireMachinesLock for "false-870000"
	I1001 16:54:30.051781    5493 start.go:93] Provisioning new machine with config: &{Name:false-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:54:30.051994    5493 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:54:30.061554    5493 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 16:54:30.111723    5493 start.go:159] libmachine.API.Create for "false-870000" (driver="qemu2")
	I1001 16:54:30.111813    5493 client.go:168] LocalClient.Create starting
	I1001 16:54:30.111994    5493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:54:30.112075    5493 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:30.112096    5493 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:30.112182    5493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:54:30.112237    5493 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:30.112254    5493 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:30.112901    5493 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:54:30.287906    5493 main.go:141] libmachine: Creating SSH key...
	I1001 16:54:30.451536    5493 main.go:141] libmachine: Creating Disk image...
	I1001 16:54:30.451542    5493 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:54:30.451818    5493 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/false-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/false-870000/disk.qcow2
	I1001 16:54:30.461706    5493 main.go:141] libmachine: STDOUT: 
	I1001 16:54:30.461722    5493 main.go:141] libmachine: STDERR: 
	I1001 16:54:30.461791    5493 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/false-870000/disk.qcow2 +20000M
	I1001 16:54:30.469715    5493 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:54:30.469807    5493 main.go:141] libmachine: STDERR: 
	I1001 16:54:30.469824    5493 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/false-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/false-870000/disk.qcow2
	I1001 16:54:30.469828    5493 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:54:30.469840    5493 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:54:30.469870    5493 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/false-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/false-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/false-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:dc:e5:e1:a2:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/false-870000/disk.qcow2
	I1001 16:54:30.471649    5493 main.go:141] libmachine: STDOUT: 
	I1001 16:54:30.471666    5493 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:54:30.471686    5493 client.go:171] duration metric: took 359.858917ms to LocalClient.Create
	I1001 16:54:32.473831    5493 start.go:128] duration metric: took 2.421825083s to createHost
	I1001 16:54:32.473905    5493 start.go:83] releasing machines lock for "false-870000", held for 2.422225792s
	W1001 16:54:32.473964    5493 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:32.487003    5493 out.go:177] * Deleting "false-870000" in qemu2 ...
	W1001 16:54:32.525579    5493 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:32.525605    5493 start.go:729] Will try again in 5 seconds ...
	I1001 16:54:37.527674    5493 start.go:360] acquireMachinesLock for false-870000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:54:37.527862    5493 start.go:364] duration metric: took 144.5µs to acquireMachinesLock for "false-870000"
	I1001 16:54:37.527914    5493 start.go:93] Provisioning new machine with config: &{Name:false-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:54:37.527973    5493 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:54:37.537132    5493 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 16:54:37.558002    5493 start.go:159] libmachine.API.Create for "false-870000" (driver="qemu2")
	I1001 16:54:37.558038    5493 client.go:168] LocalClient.Create starting
	I1001 16:54:37.558106    5493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:54:37.558132    5493 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:37.558139    5493 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:37.558178    5493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:54:37.558195    5493 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:37.558204    5493 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:37.558590    5493 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:54:37.752647    5493 main.go:141] libmachine: Creating SSH key...
	I1001 16:54:37.830227    5493 main.go:141] libmachine: Creating Disk image...
	I1001 16:54:37.830238    5493 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:54:37.830500    5493 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/false-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/false-870000/disk.qcow2
	I1001 16:54:37.840481    5493 main.go:141] libmachine: STDOUT: 
	I1001 16:54:37.840503    5493 main.go:141] libmachine: STDERR: 
	I1001 16:54:37.840582    5493 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/false-870000/disk.qcow2 +20000M
	I1001 16:54:37.849186    5493 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:54:37.849210    5493 main.go:141] libmachine: STDERR: 
	I1001 16:54:37.849223    5493 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/false-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/false-870000/disk.qcow2
	I1001 16:54:37.849228    5493 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:54:37.849239    5493 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:54:37.849261    5493 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/false-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/false-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/false-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:03:90:57:f2:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/false-870000/disk.qcow2
	I1001 16:54:37.850877    5493 main.go:141] libmachine: STDOUT: 
	I1001 16:54:37.850901    5493 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:54:37.850914    5493 client.go:171] duration metric: took 292.87725ms to LocalClient.Create
	I1001 16:54:39.851815    5493 start.go:128] duration metric: took 2.323868667s to createHost
	I1001 16:54:39.851832    5493 start.go:83] releasing machines lock for "false-870000", held for 2.323998167s
	W1001 16:54:39.851931    5493 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:39.863860    5493 out.go:201] 
	W1001 16:54:39.866856    5493 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:54:39.866864    5493 out.go:270] * 
	* 
	W1001 16:54:39.867324    5493 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:54:39.874836    5493 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (11.49s)

TestNetworkPlugins/group/enable-default-cni/Start (9.94s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-870000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-870000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.938533292s)

-- stdout --
	* [enable-default-cni-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-870000" primary control-plane node in "enable-default-cni-870000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-870000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 16:54:39.883394    5617 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:54:39.883542    5617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:54:39.883546    5617 out.go:358] Setting ErrFile to fd 2...
	I1001 16:54:39.883549    5617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:54:39.883668    5617 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:54:39.887003    5617 out.go:352] Setting JSON to false
	I1001 16:54:39.905189    5617 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5047,"bootTime":1727821832,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:54:39.905271    5617 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:54:39.909839    5617 out.go:177] * [enable-default-cni-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:54:39.917886    5617 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:54:39.917913    5617 notify.go:220] Checking for updates...
	I1001 16:54:39.925822    5617 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:54:39.928834    5617 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:54:39.931814    5617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:54:39.934822    5617 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:54:39.935980    5617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:54:39.939177    5617 config.go:182] Loaded profile config "false-870000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:54:39.939247    5617 config.go:182] Loaded profile config "multinode-603000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:54:39.939299    5617 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:54:39.942792    5617 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:54:39.947837    5617 start.go:297] selected driver: qemu2
	I1001 16:54:39.947844    5617 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:54:39.947852    5617 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:54:39.950163    5617 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 16:54:39.953801    5617 out.go:177] * Automatically selected the socket_vmnet network
	E1001 16:54:39.956889    5617 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1001 16:54:39.956902    5617 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:54:39.956919    5617 cni.go:84] Creating CNI manager for "bridge"
	I1001 16:54:39.956924    5617 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 16:54:39.956960    5617 start.go:340] cluster config:
	{Name:enable-default-cni-870000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:54:39.960833    5617 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:54:39.967862    5617 out.go:177] * Starting "enable-default-cni-870000" primary control-plane node in "enable-default-cni-870000" cluster
	I1001 16:54:39.971835    5617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:54:39.971866    5617 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:54:39.971878    5617 cache.go:56] Caching tarball of preloaded images
	I1001 16:54:39.971953    5617 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:54:39.971958    5617 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:54:39.972021    5617 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/enable-default-cni-870000/config.json ...
	I1001 16:54:39.972031    5617 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/enable-default-cni-870000/config.json: {Name:mk5c03eee6be1407a29deb3477bcf296627ce349 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:54:39.972318    5617 start.go:360] acquireMachinesLock for enable-default-cni-870000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:54:39.972351    5617 start.go:364] duration metric: took 25.167µs to acquireMachinesLock for "enable-default-cni-870000"
	I1001 16:54:39.972363    5617 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:54:39.972409    5617 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:54:39.975832    5617 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 16:54:39.991932    5617 start.go:159] libmachine.API.Create for "enable-default-cni-870000" (driver="qemu2")
	I1001 16:54:39.991965    5617 client.go:168] LocalClient.Create starting
	I1001 16:54:39.992052    5617 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:54:39.992091    5617 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:39.992100    5617 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:39.992143    5617 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:54:39.992164    5617 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:39.992172    5617 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:39.992533    5617 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:54:40.155033    5617 main.go:141] libmachine: Creating SSH key...
	I1001 16:54:40.311032    5617 main.go:141] libmachine: Creating Disk image...
	I1001 16:54:40.311041    5617 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:54:40.311229    5617 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/enable-default-cni-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/enable-default-cni-870000/disk.qcow2
	I1001 16:54:40.320772    5617 main.go:141] libmachine: STDOUT: 
	I1001 16:54:40.320802    5617 main.go:141] libmachine: STDERR: 
	I1001 16:54:40.320884    5617 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/enable-default-cni-870000/disk.qcow2 +20000M
	I1001 16:54:40.329842    5617 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:54:40.329869    5617 main.go:141] libmachine: STDERR: 
	I1001 16:54:40.329881    5617 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/enable-default-cni-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/enable-default-cni-870000/disk.qcow2
	I1001 16:54:40.329889    5617 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:54:40.329900    5617 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:54:40.329934    5617 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/enable-default-cni-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/enable-default-cni-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/enable-default-cni-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:4b:e5:f7:b1:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/enable-default-cni-870000/disk.qcow2
	I1001 16:54:40.331842    5617 main.go:141] libmachine: STDOUT: 
	I1001 16:54:40.331867    5617 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:54:40.331891    5617 client.go:171] duration metric: took 339.925583ms to LocalClient.Create
	I1001 16:54:42.334105    5617 start.go:128] duration metric: took 2.361704875s to createHost
	I1001 16:54:42.334200    5617 start.go:83] releasing machines lock for "enable-default-cni-870000", held for 2.361876834s
	W1001 16:54:42.334314    5617 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:42.352259    5617 out.go:177] * Deleting "enable-default-cni-870000" in qemu2 ...
	W1001 16:54:42.376954    5617 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:42.376979    5617 start.go:729] Will try again in 5 seconds ...
	I1001 16:54:47.379116    5617 start.go:360] acquireMachinesLock for enable-default-cni-870000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:54:47.379702    5617 start.go:364] duration metric: took 433µs to acquireMachinesLock for "enable-default-cni-870000"
	I1001 16:54:47.379841    5617 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:54:47.380139    5617 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:54:47.384839    5617 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 16:54:47.436899    5617 start.go:159] libmachine.API.Create for "enable-default-cni-870000" (driver="qemu2")
	I1001 16:54:47.436947    5617 client.go:168] LocalClient.Create starting
	I1001 16:54:47.437055    5617 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:54:47.437131    5617 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:47.437148    5617 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:47.437213    5617 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:54:47.437256    5617 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:47.437271    5617 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:47.437812    5617 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:54:47.609433    5617 main.go:141] libmachine: Creating SSH key...
	I1001 16:54:47.719201    5617 main.go:141] libmachine: Creating Disk image...
	I1001 16:54:47.719211    5617 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:54:47.719487    5617 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/enable-default-cni-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/enable-default-cni-870000/disk.qcow2
	I1001 16:54:47.728936    5617 main.go:141] libmachine: STDOUT: 
	I1001 16:54:47.728976    5617 main.go:141] libmachine: STDERR: 
	I1001 16:54:47.729040    5617 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/enable-default-cni-870000/disk.qcow2 +20000M
	I1001 16:54:47.736878    5617 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:54:47.736893    5617 main.go:141] libmachine: STDERR: 
	I1001 16:54:47.736909    5617 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/enable-default-cni-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/enable-default-cni-870000/disk.qcow2
	I1001 16:54:47.736913    5617 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:54:47.736922    5617 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:54:47.736968    5617 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/enable-default-cni-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/enable-default-cni-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/enable-default-cni-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:2e:f5:ed:2a:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/enable-default-cni-870000/disk.qcow2
	I1001 16:54:47.738601    5617 main.go:141] libmachine: STDOUT: 
	I1001 16:54:47.738616    5617 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:54:47.738634    5617 client.go:171] duration metric: took 301.687125ms to LocalClient.Create
	I1001 16:54:49.740763    5617 start.go:128] duration metric: took 2.360638125s to createHost
	I1001 16:54:49.740815    5617 start.go:83] releasing machines lock for "enable-default-cni-870000", held for 2.361117333s
	W1001 16:54:49.741157    5617 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:49.759617    5617 out.go:201] 
	W1001 16:54:49.767955    5617 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:54:49.768015    5617 out.go:270] * 
	* 
	W1001 16:54:49.770802    5617 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:54:49.780812    5617 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.94s)
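Every failure in this group reduces to the same STDERR line, Failed to connect to "/var/run/socket_vmnet": Connection refused, which points at the socket_vmnet daemon not listening on the agent rather than at the individual CNI configurations under test. Below is a minimal, hypothetical Go sketch (not part of the minikube test suite) that dials the same unix socket socket_vmnet_client connects to before launching qemu-system-aarch64; on a healthy agent it connects, on this agent it should reproduce the refusal.

// A minimal diagnostic sketch, not part of the minikube test suite: dial the
// unix socket that socket_vmnet_client connects to before launching
// qemu-system-aarch64. Expect "connection refused" if the socket file exists
// but nothing is listening, or "no such file or directory" if the daemon
// never created it.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Default SocketVMnetPath from the cluster config dumped above.
	const socketPath = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", socketPath)
}

Running a check like this before the TestNetworkPlugins group would distinguish an agent-side daemon outage from a genuine driver regression.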

TestNetworkPlugins/group/flannel/Start (10.08s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-870000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-870000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (10.083318417s)

-- stdout --
	* [flannel-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-870000" primary control-plane node in "flannel-870000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-870000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 16:54:42.045794    5722 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:54:42.045922    5722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:54:42.045926    5722 out.go:358] Setting ErrFile to fd 2...
	I1001 16:54:42.045929    5722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:54:42.046077    5722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:54:42.047124    5722 out.go:352] Setting JSON to false
	I1001 16:54:42.063254    5722 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5050,"bootTime":1727821832,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:54:42.063324    5722 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:54:42.070602    5722 out.go:177] * [flannel-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:54:42.078448    5722 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:54:42.078501    5722 notify.go:220] Checking for updates...
	I1001 16:54:42.087431    5722 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:54:42.090434    5722 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:54:42.093339    5722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:54:42.096451    5722 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:54:42.099434    5722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:54:42.101255    5722 config.go:182] Loaded profile config "enable-default-cni-870000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:54:42.101328    5722 config.go:182] Loaded profile config "multinode-603000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:54:42.101377    5722 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:54:42.105472    5722 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:54:42.112294    5722 start.go:297] selected driver: qemu2
	I1001 16:54:42.112301    5722 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:54:42.112307    5722 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:54:42.114441    5722 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 16:54:42.117419    5722 out.go:177] * Automatically selected the socket_vmnet network
	I1001 16:54:42.120568    5722 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:54:42.120599    5722 cni.go:84] Creating CNI manager for "flannel"
	I1001 16:54:42.120607    5722 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1001 16:54:42.120642    5722 start.go:340] cluster config:
	{Name:flannel-870000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/soc
ket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:54:42.124432    5722 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:54:42.133406    5722 out.go:177] * Starting "flannel-870000" primary control-plane node in "flannel-870000" cluster
	I1001 16:54:42.137434    5722 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:54:42.137451    5722 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:54:42.137463    5722 cache.go:56] Caching tarball of preloaded images
	I1001 16:54:42.137570    5722 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:54:42.137577    5722 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:54:42.137649    5722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/flannel-870000/config.json ...
	I1001 16:54:42.137665    5722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/flannel-870000/config.json: {Name:mkce99097eb17eeb1a50d63e845f372da6febd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:54:42.137900    5722 start.go:360] acquireMachinesLock for flannel-870000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:54:42.334369    5722 start.go:364] duration metric: took 196.435625ms to acquireMachinesLock for "flannel-870000"
	I1001 16:54:42.334476    5722 start.go:93] Provisioning new machine with config: &{Name:flannel-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:flannel-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:54:42.334733    5722 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:54:42.344272    5722 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 16:54:42.393701    5722 start.go:159] libmachine.API.Create for "flannel-870000" (driver="qemu2")
	I1001 16:54:42.393744    5722 client.go:168] LocalClient.Create starting
	I1001 16:54:42.393858    5722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:54:42.393917    5722 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:42.393934    5722 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:42.394002    5722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:54:42.394048    5722 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:42.394063    5722 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:42.394732    5722 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:54:42.574353    5722 main.go:141] libmachine: Creating SSH key...
	I1001 16:54:42.620101    5722 main.go:141] libmachine: Creating Disk image...
	I1001 16:54:42.620107    5722 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:54:42.620341    5722 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/flannel-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/flannel-870000/disk.qcow2
	I1001 16:54:42.629521    5722 main.go:141] libmachine: STDOUT: 
	I1001 16:54:42.629537    5722 main.go:141] libmachine: STDERR: 
	I1001 16:54:42.629596    5722 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/flannel-870000/disk.qcow2 +20000M
	I1001 16:54:42.637388    5722 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:54:42.637403    5722 main.go:141] libmachine: STDERR: 
	I1001 16:54:42.637422    5722 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/flannel-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/flannel-870000/disk.qcow2
	I1001 16:54:42.637427    5722 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:54:42.637438    5722 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:54:42.637461    5722 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/flannel-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/flannel-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/flannel-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:7b:08:41:6d:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/flannel-870000/disk.qcow2
	I1001 16:54:42.639134    5722 main.go:141] libmachine: STDOUT: 
	I1001 16:54:42.639147    5722 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:54:42.639170    5722 client.go:171] duration metric: took 245.422917ms to LocalClient.Create
	I1001 16:54:44.641370    5722 start.go:128] duration metric: took 2.306636375s to createHost
	I1001 16:54:44.641443    5722 start.go:83] releasing machines lock for "flannel-870000", held for 2.307066833s
	W1001 16:54:44.641503    5722 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:44.656546    5722 out.go:177] * Deleting "flannel-870000" in qemu2 ...
	W1001 16:54:44.695933    5722 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:44.695950    5722 start.go:729] Will try again in 5 seconds ...
	I1001 16:54:49.698105    5722 start.go:360] acquireMachinesLock for flannel-870000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:54:49.740992    5722 start.go:364] duration metric: took 42.705125ms to acquireMachinesLock for "flannel-870000"
	I1001 16:54:49.741185    5722 start.go:93] Provisioning new machine with config: &{Name:flannel-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:flannel-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:54:49.741479    5722 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:54:49.757759    5722 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 16:54:49.809527    5722 start.go:159] libmachine.API.Create for "flannel-870000" (driver="qemu2")
	I1001 16:54:49.809577    5722 client.go:168] LocalClient.Create starting
	I1001 16:54:49.809681    5722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:54:49.809729    5722 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:49.809747    5722 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:49.809828    5722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:54:49.809857    5722 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:49.809882    5722 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:49.810410    5722 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:54:49.987449    5722 main.go:141] libmachine: Creating SSH key...
	I1001 16:54:50.038015    5722 main.go:141] libmachine: Creating Disk image...
	I1001 16:54:50.038024    5722 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:54:50.038257    5722 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/flannel-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/flannel-870000/disk.qcow2
	I1001 16:54:50.048006    5722 main.go:141] libmachine: STDOUT: 
	I1001 16:54:50.048032    5722 main.go:141] libmachine: STDERR: 
	I1001 16:54:50.048107    5722 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/flannel-870000/disk.qcow2 +20000M
	I1001 16:54:50.057141    5722 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:54:50.057167    5722 main.go:141] libmachine: STDERR: 
	I1001 16:54:50.057192    5722 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/flannel-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/flannel-870000/disk.qcow2
	I1001 16:54:50.057197    5722 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:54:50.057203    5722 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:54:50.057240    5722 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/flannel-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/flannel-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/flannel-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:73:d0:fa:3f:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/flannel-870000/disk.qcow2
	I1001 16:54:50.059096    5722 main.go:141] libmachine: STDOUT: 
	I1001 16:54:50.059112    5722 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:54:50.059126    5722 client.go:171] duration metric: took 249.548ms to LocalClient.Create
	I1001 16:54:52.060477    5722 start.go:128] duration metric: took 2.319008s to createHost
	I1001 16:54:52.060488    5722 start.go:83] releasing machines lock for "flannel-870000", held for 2.319486875s
	W1001 16:54:52.060567    5722 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:52.070455    5722 out.go:201] 
	W1001 16:54:52.082410    5722 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:54:52.082420    5722 out.go:270] * 
	* 
	W1001 16:54:52.083110    5722 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:54:52.093424    5722 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (10.08s)
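Provisioning itself gets fairly far before the network step: in every attempt logged above, qemu-img convert and qemu-img resize finish with empty STDERR and "Image resized.", and the failure only appears once socket_vmnet_client is invoked. A small, self-contained Go sketch (hypothetical /tmp scratch paths, not the $MINIKUBE_HOME machine paths used by the tests) that repeats those two qemu-img steps can confirm the disk-image half of the flow independently of the vmnet socket.

// A small, self-contained sketch that repeats the two qemu-img steps logged
// by the driver: convert a raw image to qcow2, then grow it with "+20000M".
// It creates a 1M scratch raw image first so the commands can run anywhere
// qemu-img is installed; the paths are hypothetical scratch files.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s\n", name, args, out)
	if err != nil {
		log.Fatalf("%s failed: %v", name, err)
	}
}

func main() {
	raw := "/tmp/scratch-disk.raw"
	qcow2 := "/tmp/scratch-disk.qcow2"

	run("qemu-img", "create", "-f", "raw", raw, "1M")
	run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2)
	run("qemu-img", "resize", qcow2, "+20000M")
}

If these steps succeed while the socket dial above still fails, the environment problem is isolated to the socket_vmnet service rather than to QEMU or the ISO/disk tooling.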

TestNetworkPlugins/group/bridge/Start (9.89s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-870000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-870000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.887925084s)

-- stdout --
	* [bridge-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-870000" primary control-plane node in "bridge-870000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-870000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 16:54:51.953616    5837 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:54:51.953754    5837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:54:51.953759    5837 out.go:358] Setting ErrFile to fd 2...
	I1001 16:54:51.953762    5837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:54:51.953897    5837 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:54:51.954969    5837 out.go:352] Setting JSON to false
	I1001 16:54:51.971663    5837 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5059,"bootTime":1727821832,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:54:51.971735    5837 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:54:51.977633    5837 out.go:177] * [bridge-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:54:51.984409    5837 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:54:51.984489    5837 notify.go:220] Checking for updates...
	I1001 16:54:51.992450    5837 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:54:51.995456    5837 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:54:51.998442    5837 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:54:52.014470    5837 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:54:52.017462    5837 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:54:52.020849    5837 config.go:182] Loaded profile config "flannel-870000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:54:52.020921    5837 config.go:182] Loaded profile config "multinode-603000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:54:52.020969    5837 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:54:52.025426    5837 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:54:52.032457    5837 start.go:297] selected driver: qemu2
	I1001 16:54:52.032464    5837 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:54:52.032472    5837 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:54:52.034954    5837 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 16:54:52.038409    5837 out.go:177] * Automatically selected the socket_vmnet network
	I1001 16:54:52.041546    5837 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:54:52.041570    5837 cni.go:84] Creating CNI manager for "bridge"
	I1001 16:54:52.041574    5837 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 16:54:52.041614    5837 start.go:340] cluster config:
	{Name:bridge-870000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:54:52.045894    5837 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:54:52.056559    5837 out.go:177] * Starting "bridge-870000" primary control-plane node in "bridge-870000" cluster
	I1001 16:54:52.060385    5837 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:54:52.060405    5837 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:54:52.060413    5837 cache.go:56] Caching tarball of preloaded images
	I1001 16:54:52.060516    5837 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:54:52.060522    5837 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:54:52.060599    5837 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/bridge-870000/config.json ...
	I1001 16:54:52.060612    5837 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/bridge-870000/config.json: {Name:mkaf0fff0e34f8b002aa957ef35b1d02e83ba051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:54:52.066049    5837 start.go:360] acquireMachinesLock for bridge-870000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:54:52.066096    5837 start.go:364] duration metric: took 38.625µs to acquireMachinesLock for "bridge-870000"
	I1001 16:54:52.066111    5837 start.go:93] Provisioning new machine with config: &{Name:bridge-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:bridge-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:54:52.066153    5837 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:54:52.079406    5837 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 16:54:52.100729    5837 start.go:159] libmachine.API.Create for "bridge-870000" (driver="qemu2")
	I1001 16:54:52.100780    5837 client.go:168] LocalClient.Create starting
	I1001 16:54:52.100855    5837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:54:52.100889    5837 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:52.100900    5837 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:52.100948    5837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:54:52.100974    5837 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:52.100985    5837 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:52.101399    5837 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:54:52.264238    5837 main.go:141] libmachine: Creating SSH key...
	I1001 16:54:52.351108    5837 main.go:141] libmachine: Creating Disk image...
	I1001 16:54:52.351121    5837 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:54:52.351376    5837 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/bridge-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/bridge-870000/disk.qcow2
	I1001 16:54:52.361017    5837 main.go:141] libmachine: STDOUT: 
	I1001 16:54:52.361038    5837 main.go:141] libmachine: STDERR: 
	I1001 16:54:52.361119    5837 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/bridge-870000/disk.qcow2 +20000M
	I1001 16:54:52.369934    5837 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:54:52.369955    5837 main.go:141] libmachine: STDERR: 
	I1001 16:54:52.369974    5837 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/bridge-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/bridge-870000/disk.qcow2
	I1001 16:54:52.369992    5837 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:54:52.370004    5837 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:54:52.370045    5837 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/bridge-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/bridge-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/bridge-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:4a:b7:c4:bc:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/bridge-870000/disk.qcow2
	I1001 16:54:52.371983    5837 main.go:141] libmachine: STDOUT: 
	I1001 16:54:52.371995    5837 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:54:52.372012    5837 client.go:171] duration metric: took 271.230208ms to LocalClient.Create
	I1001 16:54:54.374055    5837 start.go:128] duration metric: took 2.307931584s to createHost
	I1001 16:54:54.374072    5837 start.go:83] releasing machines lock for "bridge-870000", held for 2.308007167s
	W1001 16:54:54.374083    5837 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:54.384282    5837 out.go:177] * Deleting "bridge-870000" in qemu2 ...
	W1001 16:54:54.399018    5837 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:54.399027    5837 start.go:729] Will try again in 5 seconds ...
	I1001 16:54:59.401140    5837 start.go:360] acquireMachinesLock for bridge-870000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:54:59.401563    5837 start.go:364] duration metric: took 340.042µs to acquireMachinesLock for "bridge-870000"
	I1001 16:54:59.401706    5837 start.go:93] Provisioning new machine with config: &{Name:bridge-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:bridge-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:54:59.402031    5837 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:54:59.407613    5837 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 16:54:59.458277    5837 start.go:159] libmachine.API.Create for "bridge-870000" (driver="qemu2")
	I1001 16:54:59.458330    5837 client.go:168] LocalClient.Create starting
	I1001 16:54:59.458438    5837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:54:59.458494    5837 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:59.458512    5837 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:59.458583    5837 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:54:59.458628    5837 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:59.458639    5837 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:59.460192    5837 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:54:59.637247    5837 main.go:141] libmachine: Creating SSH key...
	I1001 16:54:59.739577    5837 main.go:141] libmachine: Creating Disk image...
	I1001 16:54:59.739583    5837 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:54:59.739829    5837 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/bridge-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/bridge-870000/disk.qcow2
	I1001 16:54:59.749194    5837 main.go:141] libmachine: STDOUT: 
	I1001 16:54:59.749213    5837 main.go:141] libmachine: STDERR: 
	I1001 16:54:59.749261    5837 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/bridge-870000/disk.qcow2 +20000M
	I1001 16:54:59.757033    5837 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:54:59.757047    5837 main.go:141] libmachine: STDERR: 
	I1001 16:54:59.757061    5837 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/bridge-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/bridge-870000/disk.qcow2
	I1001 16:54:59.757066    5837 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:54:59.757074    5837 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:54:59.757100    5837 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/bridge-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/bridge-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/bridge-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:7d:2a:d0:7b:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/bridge-870000/disk.qcow2
	I1001 16:54:59.758678    5837 main.go:141] libmachine: STDOUT: 
	I1001 16:54:59.758692    5837 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:54:59.758703    5837 client.go:171] duration metric: took 300.370833ms to LocalClient.Create
	I1001 16:55:01.760840    5837 start.go:128] duration metric: took 2.35881775s to createHost
	I1001 16:55:01.760906    5837 start.go:83] releasing machines lock for "bridge-870000", held for 2.359349625s
	W1001 16:55:01.761277    5837 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:01.777897    5837 out.go:201] 
	W1001 16:55:01.783038    5837 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:55:01.783064    5837 out.go:270] * 
	* 
	W1001 16:55:01.785420    5837 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:55:01.798946    5837 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.89s)

TestNetworkPlugins/group/kubenet/Start (9.83s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-870000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-870000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.827731708s)

-- stdout --
	* [kubenet-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-870000" primary control-plane node in "kubenet-870000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-870000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 16:54:54.450560    5950 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:54:54.450692    5950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:54:54.450696    5950 out.go:358] Setting ErrFile to fd 2...
	I1001 16:54:54.450698    5950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:54:54.450829    5950 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:54:54.451879    5950 out.go:352] Setting JSON to false
	I1001 16:54:54.467819    5950 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5062,"bootTime":1727821832,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:54:54.467895    5950 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:54:54.473560    5950 out.go:177] * [kubenet-870000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:54:54.482368    5950 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:54:54.482419    5950 notify.go:220] Checking for updates...
	I1001 16:54:54.489346    5950 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:54:54.492374    5950 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:54:54.495302    5950 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:54:54.498330    5950 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:54:54.501357    5950 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:54:54.503139    5950 config.go:182] Loaded profile config "bridge-870000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:54:54.503211    5950 config.go:182] Loaded profile config "multinode-603000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:54:54.503263    5950 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:54:54.507305    5950 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:54:54.514163    5950 start.go:297] selected driver: qemu2
	I1001 16:54:54.514170    5950 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:54:54.514176    5950 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:54:54.516305    5950 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 16:54:54.519330    5950 out.go:177] * Automatically selected the socket_vmnet network
	I1001 16:54:54.522418    5950 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:54:54.522440    5950 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1001 16:54:54.522475    5950 start.go:340] cluster config:
	{Name:kubenet-870000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:54:54.526065    5950 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:54:54.533332    5950 out.go:177] * Starting "kubenet-870000" primary control-plane node in "kubenet-870000" cluster
	I1001 16:54:54.537326    5950 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:54:54.537342    5950 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:54:54.537354    5950 cache.go:56] Caching tarball of preloaded images
	I1001 16:54:54.537427    5950 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:54:54.537437    5950 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:54:54.537501    5950 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/kubenet-870000/config.json ...
	I1001 16:54:54.537513    5950 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/kubenet-870000/config.json: {Name:mkc185b712b06562c62fcd40c636e9dc577fa776 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:54:54.537741    5950 start.go:360] acquireMachinesLock for kubenet-870000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:54:54.537776    5950 start.go:364] duration metric: took 29.5µs to acquireMachinesLock for "kubenet-870000"
	I1001 16:54:54.537791    5950 start.go:93] Provisioning new machine with config: &{Name:kubenet-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:kubenet-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:54:54.537833    5950 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:54:54.542292    5950 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 16:54:54.560465    5950 start.go:159] libmachine.API.Create for "kubenet-870000" (driver="qemu2")
	I1001 16:54:54.560503    5950 client.go:168] LocalClient.Create starting
	I1001 16:54:54.560556    5950 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:54:54.560586    5950 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:54.560596    5950 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:54.560636    5950 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:54:54.560659    5950 main.go:141] libmachine: Decoding PEM data...
	I1001 16:54:54.560669    5950 main.go:141] libmachine: Parsing certificate...
	I1001 16:54:54.561030    5950 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:54:54.721328    5950 main.go:141] libmachine: Creating SSH key...
	I1001 16:54:54.782426    5950 main.go:141] libmachine: Creating Disk image...
	I1001 16:54:54.782433    5950 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:54:54.782675    5950 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubenet-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubenet-870000/disk.qcow2
	I1001 16:54:54.791635    5950 main.go:141] libmachine: STDOUT: 
	I1001 16:54:54.791652    5950 main.go:141] libmachine: STDERR: 
	I1001 16:54:54.791711    5950 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubenet-870000/disk.qcow2 +20000M
	I1001 16:54:54.799497    5950 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:54:54.799519    5950 main.go:141] libmachine: STDERR: 
	I1001 16:54:54.799537    5950 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubenet-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubenet-870000/disk.qcow2
	I1001 16:54:54.799542    5950 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:54:54.799554    5950 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:54:54.799584    5950 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubenet-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubenet-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubenet-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:ce:96:e7:f0:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubenet-870000/disk.qcow2
	I1001 16:54:54.801264    5950 main.go:141] libmachine: STDOUT: 
	I1001 16:54:54.801279    5950 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:54:54.801300    5950 client.go:171] duration metric: took 240.794334ms to LocalClient.Create
	I1001 16:54:56.803523    5950 start.go:128] duration metric: took 2.265678875s to createHost
	I1001 16:54:56.803597    5950 start.go:83] releasing machines lock for "kubenet-870000", held for 2.265842709s
	W1001 16:54:56.803664    5950 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:56.816601    5950 out.go:177] * Deleting "kubenet-870000" in qemu2 ...
	W1001 16:54:56.849955    5950 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:54:56.849983    5950 start.go:729] Will try again in 5 seconds ...
	I1001 16:55:01.852030    5950 start.go:360] acquireMachinesLock for kubenet-870000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:55:01.852193    5950 start.go:364] duration metric: took 124.542µs to acquireMachinesLock for "kubenet-870000"
	I1001 16:55:01.852252    5950 start.go:93] Provisioning new machine with config: &{Name:kubenet-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:kubenet-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:55:01.852329    5950 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:55:01.860908    5950 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1001 16:55:01.884901    5950 start.go:159] libmachine.API.Create for "kubenet-870000" (driver="qemu2")
	I1001 16:55:01.884943    5950 client.go:168] LocalClient.Create starting
	I1001 16:55:01.885040    5950 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:55:01.885073    5950 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:01.885092    5950 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:01.885130    5950 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:55:01.885154    5950 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:01.885162    5950 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:01.885503    5950 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:55:02.081435    5950 main.go:141] libmachine: Creating SSH key...
	I1001 16:55:02.193086    5950 main.go:141] libmachine: Creating Disk image...
	I1001 16:55:02.193095    5950 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:55:02.193277    5950 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubenet-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubenet-870000/disk.qcow2
	I1001 16:55:02.202940    5950 main.go:141] libmachine: STDOUT: 
	I1001 16:55:02.202962    5950 main.go:141] libmachine: STDERR: 
	I1001 16:55:02.203042    5950 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubenet-870000/disk.qcow2 +20000M
	I1001 16:55:02.212065    5950 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:55:02.212089    5950 main.go:141] libmachine: STDERR: 
	I1001 16:55:02.212101    5950 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubenet-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubenet-870000/disk.qcow2
	I1001 16:55:02.212108    5950 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:55:02.212116    5950 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:55:02.212146    5950 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubenet-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubenet-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubenet-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:36:a9:42:a4:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/kubenet-870000/disk.qcow2
	I1001 16:55:02.214322    5950 main.go:141] libmachine: STDOUT: 
	I1001 16:55:02.214342    5950 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:55:02.214368    5950 client.go:171] duration metric: took 329.425083ms to LocalClient.Create
	I1001 16:55:04.216443    5950 start.go:128] duration metric: took 2.364135666s to createHost
	I1001 16:55:04.216513    5950 start.go:83] releasing machines lock for "kubenet-870000", held for 2.364316958s
	W1001 16:55:04.216686    5950 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:04.226208    5950 out.go:201] 
	W1001 16:55:04.229984    5950 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:55:04.230015    5950 out.go:270] * 
	* 
	W1001 16:55:04.231444    5950 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:55:04.241966    5950 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.83s)

TestStartStop/group/old-k8s-version/serial/FirstStart (10.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-663000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-663000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.002539833s)

-- stdout --
	* [old-k8s-version-663000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-663000" primary control-plane node in "old-k8s-version-663000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-663000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1001 16:55:04.015747    6063 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:55:04.015901    6063 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:04.015904    6063 out.go:358] Setting ErrFile to fd 2...
	I1001 16:55:04.015907    6063 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:04.016036    6063 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:55:04.017048    6063 out.go:352] Setting JSON to false
	I1001 16:55:04.033090    6063 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5072,"bootTime":1727821832,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:55:04.033153    6063 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:55:04.038915    6063 out.go:177] * [old-k8s-version-663000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:55:04.045893    6063 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:55:04.045947    6063 notify.go:220] Checking for updates...
	I1001 16:55:04.053848    6063 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:55:04.056764    6063 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:55:04.060788    6063 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:55:04.063849    6063 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:55:04.066825    6063 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:55:04.070270    6063 config.go:182] Loaded profile config "kubenet-870000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:55:04.070335    6063 config.go:182] Loaded profile config "multinode-603000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:55:04.070387    6063 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:55:04.073845    6063 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:55:04.080794    6063 start.go:297] selected driver: qemu2
	I1001 16:55:04.080800    6063 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:55:04.080805    6063 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:55:04.082914    6063 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 16:55:04.085833    6063 out.go:177] * Automatically selected the socket_vmnet network
	I1001 16:55:04.089853    6063 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:55:04.089871    6063 cni.go:84] Creating CNI manager for ""
	I1001 16:55:04.089892    6063 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1001 16:55:04.089919    6063 start.go:340] cluster config:
	{Name:old-k8s-version-663000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin
/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:55:04.093717    6063 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:04.098850    6063 out.go:177] * Starting "old-k8s-version-663000" primary control-plane node in "old-k8s-version-663000" cluster
	I1001 16:55:04.106850    6063 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1001 16:55:04.106867    6063 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1001 16:55:04.106884    6063 cache.go:56] Caching tarball of preloaded images
	I1001 16:55:04.106960    6063 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:55:04.106966    6063 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1001 16:55:04.107028    6063 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/old-k8s-version-663000/config.json ...
	I1001 16:55:04.107039    6063 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/old-k8s-version-663000/config.json: {Name:mk5aad2b5bfa8cca0e5a8a0affa90187ca72ff0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:55:04.107265    6063 start.go:360] acquireMachinesLock for old-k8s-version-663000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:55:04.216610    6063 start.go:364] duration metric: took 109.314583ms to acquireMachinesLock for "old-k8s-version-663000"
	I1001 16:55:04.216660    6063 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-663000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:55:04.216773    6063 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:55:04.224996    6063 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 16:55:04.263640    6063 start.go:159] libmachine.API.Create for "old-k8s-version-663000" (driver="qemu2")
	I1001 16:55:04.263695    6063 client.go:168] LocalClient.Create starting
	I1001 16:55:04.263813    6063 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:55:04.263871    6063 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:04.263888    6063 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:04.263960    6063 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:55:04.263999    6063 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:04.264015    6063 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:04.264536    6063 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:55:04.441307    6063 main.go:141] libmachine: Creating SSH key...
	I1001 16:55:04.548799    6063 main.go:141] libmachine: Creating Disk image...
	I1001 16:55:04.548810    6063 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:55:04.549085    6063 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/disk.qcow2
	I1001 16:55:04.559060    6063 main.go:141] libmachine: STDOUT: 
	I1001 16:55:04.559095    6063 main.go:141] libmachine: STDERR: 
	I1001 16:55:04.559173    6063 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/disk.qcow2 +20000M
	I1001 16:55:04.567575    6063 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:55:04.567597    6063 main.go:141] libmachine: STDERR: 
	I1001 16:55:04.567625    6063 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/disk.qcow2
	I1001 16:55:04.567630    6063 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:55:04.567649    6063 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:55:04.567679    6063 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:e4:c9:70:ab:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/disk.qcow2
	I1001 16:55:04.569528    6063 main.go:141] libmachine: STDOUT: 
	I1001 16:55:04.569545    6063 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:55:04.569564    6063 client.go:171] duration metric: took 305.867667ms to LocalClient.Create
	I1001 16:55:06.571621    6063 start.go:128] duration metric: took 2.354875208s to createHost
	I1001 16:55:06.571635    6063 start.go:83] releasing machines lock for "old-k8s-version-663000", held for 2.355045208s
	W1001 16:55:06.571654    6063 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:06.585740    6063 out.go:177] * Deleting "old-k8s-version-663000" in qemu2 ...
	W1001 16:55:06.597836    6063 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:06.597851    6063 start.go:729] Will try again in 5 seconds ...
	I1001 16:55:11.598079    6063 start.go:360] acquireMachinesLock for old-k8s-version-663000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:55:11.598509    6063 start.go:364] duration metric: took 333.958µs to acquireMachinesLock for "old-k8s-version-663000"
	I1001 16:55:11.598644    6063 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-663000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:55:11.598930    6063 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:55:11.606295    6063 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 16:55:11.657810    6063 start.go:159] libmachine.API.Create for "old-k8s-version-663000" (driver="qemu2")
	I1001 16:55:11.657859    6063 client.go:168] LocalClient.Create starting
	I1001 16:55:11.657988    6063 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:55:11.658060    6063 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:11.658080    6063 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:11.658142    6063 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:55:11.658193    6063 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:11.658215    6063 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:11.658725    6063 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:55:11.831575    6063 main.go:141] libmachine: Creating SSH key...
	I1001 16:55:11.917220    6063 main.go:141] libmachine: Creating Disk image...
	I1001 16:55:11.917227    6063 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:55:11.917474    6063 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/disk.qcow2
	I1001 16:55:11.926903    6063 main.go:141] libmachine: STDOUT: 
	I1001 16:55:11.926923    6063 main.go:141] libmachine: STDERR: 
	I1001 16:55:11.926980    6063 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/disk.qcow2 +20000M
	I1001 16:55:11.935286    6063 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:55:11.935382    6063 main.go:141] libmachine: STDERR: 
	I1001 16:55:11.935395    6063 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/disk.qcow2
	I1001 16:55:11.935401    6063 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:55:11.935409    6063 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:55:11.935434    6063 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:35:59:70:ff:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/disk.qcow2
	I1001 16:55:11.937108    6063 main.go:141] libmachine: STDOUT: 
	I1001 16:55:11.937122    6063 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:55:11.937146    6063 client.go:171] duration metric: took 279.282583ms to LocalClient.Create
	I1001 16:55:13.939298    6063 start.go:128] duration metric: took 2.340378291s to createHost
	I1001 16:55:13.939344    6063 start.go:83] releasing machines lock for "old-k8s-version-663000", held for 2.340843417s
	W1001 16:55:13.939683    6063 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-663000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-663000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:13.958359    6063 out.go:201] 
	W1001 16:55:13.962267    6063 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:55:13.962295    6063 out.go:270] * 
	* 
	W1001 16:55:13.965162    6063 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:55:13.976272    6063 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-663000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000: exit status 7 (70.235834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.08s)
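Every attempt above fails at the same point: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the daemon socket, so QEMU is never launched. A minimal triage on the build host, assuming socket_vmnet was installed and is managed via Homebrew (the service name and restart step are assumptions, not taken from this log), would be:

	# is anything serving the socket that libmachine tries to use?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# if the daemon is not running, restart it (assumes a Homebrew-managed service)
	sudo brew services restart socket_vmnet

If the daemon is listening again, re-running the start command quoted above should get past VM creation.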

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (10.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-708000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-708000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.047589s)

                                                
                                                
-- stdout --
	* [no-preload-708000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-708000" primary control-plane node in "no-preload-708000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-708000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:55:06.401844    6168 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:55:06.401977    6168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:06.401979    6168 out.go:358] Setting ErrFile to fd 2...
	I1001 16:55:06.401989    6168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:06.402119    6168 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:55:06.403217    6168 out.go:352] Setting JSON to false
	I1001 16:55:06.419363    6168 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5074,"bootTime":1727821832,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:55:06.419437    6168 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:55:06.425979    6168 out.go:177] * [no-preload-708000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:55:06.432851    6168 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:55:06.432897    6168 notify.go:220] Checking for updates...
	I1001 16:55:06.438787    6168 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:55:06.441788    6168 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:55:06.445761    6168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:55:06.448790    6168 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:55:06.451803    6168 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:55:06.455152    6168 config.go:182] Loaded profile config "multinode-603000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:55:06.455234    6168 config.go:182] Loaded profile config "old-k8s-version-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1001 16:55:06.455284    6168 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:55:06.459756    6168 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:55:06.466763    6168 start.go:297] selected driver: qemu2
	I1001 16:55:06.466768    6168 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:55:06.466775    6168 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:55:06.469063    6168 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 16:55:06.471763    6168 out.go:177] * Automatically selected the socket_vmnet network
	I1001 16:55:06.474907    6168 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:55:06.474931    6168 cni.go:84] Creating CNI manager for ""
	I1001 16:55:06.474953    6168 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:55:06.474958    6168 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 16:55:06.474989    6168 start.go:340] cluster config:
	{Name:no-preload-708000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-708000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socke
t_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:55:06.478650    6168 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:06.486780    6168 out.go:177] * Starting "no-preload-708000" primary control-plane node in "no-preload-708000" cluster
	I1001 16:55:06.489743    6168 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:55:06.489850    6168 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/no-preload-708000/config.json ...
	I1001 16:55:06.489869    6168 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/no-preload-708000/config.json: {Name:mk826217aa3a7bc69e14c502a8b1ccfe4a852c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:55:06.489870    6168 cache.go:107] acquiring lock: {Name:mk04d0efd994fa5cbd61ff37798e20026905d950 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:06.489902    6168 cache.go:107] acquiring lock: {Name:mk3d2db9881c3f99d7f96a5c119ded40639f07a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:06.489962    6168 cache.go:115] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1001 16:55:06.489969    6168 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 102.208µs
	I1001 16:55:06.489976    6168 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1001 16:55:06.489984    6168 cache.go:107] acquiring lock: {Name:mk0fd2efae3f671de93bb476544e060c6d6ddd62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:06.489868    6168 cache.go:107] acquiring lock: {Name:mkccf526f56bae555098745e5050083249a8b654 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:06.490036    6168 cache.go:107] acquiring lock: {Name:mk4dbdefa2b4eeaf9599cb97d17a3078038d79c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:06.490046    6168 cache.go:107] acquiring lock: {Name:mk6d5a425cd5d9689eebd57086bd62ebe7a32d82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:06.489927    6168 cache.go:107] acquiring lock: {Name:mk6b1c72251caeb834b5f051f04e6c0fea1b53e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:06.490078    6168 cache.go:107] acquiring lock: {Name:mk300718fd75ea01fc8f43fe97958036ecb869da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:06.490090    6168 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1001 16:55:06.490179    6168 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1001 16:55:06.490243    6168 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1001 16:55:06.490278    6168 start.go:360] acquireMachinesLock for no-preload-708000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:55:06.490284    6168 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1001 16:55:06.490312    6168 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1001 16:55:06.490454    6168 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1001 16:55:06.490486    6168 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1001 16:55:06.496281    6168 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1001 16:55:06.496302    6168 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1001 16:55:06.496388    6168 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1001 16:55:06.496428    6168 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1001 16:55:06.496482    6168 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1001 16:55:06.496531    6168 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1001 16:55:06.496576    6168 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1001 16:55:06.571702    6168 start.go:364] duration metric: took 81.410708ms to acquireMachinesLock for "no-preload-708000"
	I1001 16:55:06.571738    6168 start.go:93] Provisioning new machine with config: &{Name:no-preload-708000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.31.1 ClusterName:no-preload-708000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:55:06.571779    6168 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:55:06.577763    6168 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 16:55:06.594870    6168 start.go:159] libmachine.API.Create for "no-preload-708000" (driver="qemu2")
	I1001 16:55:06.594896    6168 client.go:168] LocalClient.Create starting
	I1001 16:55:06.595007    6168 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:55:06.595040    6168 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:06.595051    6168 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:06.595090    6168 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:55:06.595116    6168 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:06.595124    6168 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:06.595527    6168 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:55:06.769620    6168 main.go:141] libmachine: Creating SSH key...
	I1001 16:55:06.908084    6168 main.go:141] libmachine: Creating Disk image...
	I1001 16:55:06.908095    6168 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:55:06.908360    6168 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/disk.qcow2
	I1001 16:55:06.917831    6168 main.go:141] libmachine: STDOUT: 
	I1001 16:55:06.917846    6168 main.go:141] libmachine: STDERR: 
	I1001 16:55:06.917904    6168 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/disk.qcow2 +20000M
	I1001 16:55:06.925828    6168 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:55:06.925841    6168 main.go:141] libmachine: STDERR: 
	I1001 16:55:06.925859    6168 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/disk.qcow2
	I1001 16:55:06.925863    6168 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:55:06.925877    6168 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:55:06.925903    6168 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:45:f3:bf:e2:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/disk.qcow2
	I1001 16:55:06.927560    6168 main.go:141] libmachine: STDOUT: 
	I1001 16:55:06.927572    6168 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:55:06.927589    6168 client.go:171] duration metric: took 332.693791ms to LocalClient.Create
	I1001 16:55:08.408345    6168 cache.go:162] opening:  /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1001 16:55:08.567167    6168 cache.go:162] opening:  /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1001 16:55:08.586542    6168 cache.go:162] opening:  /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1001 16:55:08.588653    6168 cache.go:162] opening:  /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I1001 16:55:08.682071    6168 cache.go:157] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1001 16:55:08.682135    6168 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 2.192251958s
	I1001 16:55:08.682163    6168 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1001 16:55:08.927824    6168 start.go:128] duration metric: took 2.3560645s to createHost
	I1001 16:55:08.927870    6168 start.go:83] releasing machines lock for "no-preload-708000", held for 2.356180666s
	W1001 16:55:08.927925    6168 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:08.946794    6168 out.go:177] * Deleting "no-preload-708000" in qemu2 ...
	W1001 16:55:08.988323    6168 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:08.988351    6168 start.go:729] Will try again in 5 seconds ...
	I1001 16:55:09.092518    6168 cache.go:162] opening:  /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1001 16:55:09.095279    6168 cache.go:162] opening:  /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I1001 16:55:09.122014    6168 cache.go:162] opening:  /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I1001 16:55:10.270151    6168 cache.go:157] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1001 16:55:10.270229    6168 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.780209417s
	I1001 16:55:10.270266    6168 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1001 16:55:11.399098    6168 cache.go:157] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1001 16:55:11.399144    6168 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 4.909205459s
	I1001 16:55:11.399172    6168 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1001 16:55:12.611428    6168 cache.go:157] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1001 16:55:12.611493    6168 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 6.12172775s
	I1001 16:55:12.611523    6168 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1001 16:55:13.057920    6168 cache.go:157] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1001 16:55:13.057968    6168 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 6.568091708s
	I1001 16:55:13.057993    6168 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1001 16:55:13.127498    6168 cache.go:157] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1001 16:55:13.127537    6168 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 6.637763542s
	I1001 16:55:13.127563    6168 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1001 16:55:13.988799    6168 start.go:360] acquireMachinesLock for no-preload-708000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:55:13.989190    6168 start.go:364] duration metric: took 326.833µs to acquireMachinesLock for "no-preload-708000"
	I1001 16:55:13.989337    6168 start.go:93] Provisioning new machine with config: &{Name:no-preload-708000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.31.1 ClusterName:no-preload-708000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:55:13.989569    6168 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:55:13.998631    6168 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 16:55:14.046462    6168 start.go:159] libmachine.API.Create for "no-preload-708000" (driver="qemu2")
	I1001 16:55:14.046510    6168 client.go:168] LocalClient.Create starting
	I1001 16:55:14.046603    6168 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:55:14.046650    6168 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:14.046668    6168 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:14.046726    6168 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:55:14.046751    6168 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:14.046765    6168 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:14.047205    6168 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:55:14.278928    6168 main.go:141] libmachine: Creating SSH key...
	I1001 16:55:14.363703    6168 main.go:141] libmachine: Creating Disk image...
	I1001 16:55:14.363710    6168 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:55:14.363899    6168 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/disk.qcow2
	I1001 16:55:14.373061    6168 main.go:141] libmachine: STDOUT: 
	I1001 16:55:14.373086    6168 main.go:141] libmachine: STDERR: 
	I1001 16:55:14.373153    6168 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/disk.qcow2 +20000M
	I1001 16:55:14.381695    6168 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:55:14.381714    6168 main.go:141] libmachine: STDERR: 
	I1001 16:55:14.381733    6168 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/disk.qcow2
	I1001 16:55:14.381737    6168 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:55:14.381746    6168 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:55:14.381784    6168 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:cf:b3:0d:e0:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/disk.qcow2
	I1001 16:55:14.383704    6168 main.go:141] libmachine: STDOUT: 
	I1001 16:55:14.383718    6168 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:55:14.383733    6168 client.go:171] duration metric: took 337.223ms to LocalClient.Create
	I1001 16:55:16.384109    6168 start.go:128] duration metric: took 2.394523s to createHost
	I1001 16:55:16.384182    6168 start.go:83] releasing machines lock for "no-preload-708000", held for 2.395006458s
	W1001 16:55:16.384462    6168 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-708000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-708000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:16.394159    6168 out.go:201] 
	W1001 16:55:16.397090    6168 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:55:16.397121    6168 out.go:270] * 
	* 
	W1001 16:55:16.400474    6168 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:55:16.406068    6168 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-708000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000: exit status 7 (64.268042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-708000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.11s)
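Although VM creation failed, the --preload=false image cache was populated: the cache.go lines above record pause, coredns, kube-proxy, kube-apiserver, kube-scheduler and kube-controller-manager saved to tar files (etcd was still downloading when the run exited). A quick sanity check against the cache directory reported in the log (a sketch, not output captured in this report):

	ls /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/
	# expected per the log: pause_3.10, kube-proxy_v1.31.1, kube-apiserver_v1.31.1,
	# kube-scheduler_v1.31.1, kube-controller-manager_v1.31.1 and a coredns/ subdirectory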

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-663000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-663000 create -f testdata/busybox.yaml: exit status 1 (32.3005ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-663000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-663000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000: exit status 7 (31.48925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-663000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000: exit status 7 (32.643292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
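The kubectl error here is a downstream effect of FirstStart: the cluster was never created, so no context named "old-k8s-version-663000" was written to the kubeconfig used by this run. One way to confirm that, using the KUBECONFIG path shown earlier in the report (a sketch, not captured output):

	KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig kubectl config get-contexts
	# "old-k8s-version-663000" is expected to be absent until a start succeeds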

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-663000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-663000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-663000 describe deploy/metrics-server -n kube-system: exit status 1 (29.646083ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-663000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-663000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000: exit status 7 (31.052625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)
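The addons enable command itself exited cleanly; only the verification step needs a live apiserver. Once the profile actually starts, the image override the test looks for could be checked directly (a sketch; it assumes metrics-server is the first container in the deployment):

	kubectl --context old-k8s-version-663000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# the test expects this value to contain "fake.domain/registry.k8s.io/echoserver:1.4"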

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-708000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-708000 create -f testdata/busybox.yaml: exit status 1 (30.011875ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-708000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-708000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000: exit status 7 (28.916625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-708000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000: exit status 7 (29.399833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-708000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
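As in the other post-mortems, status --format={{.Host}} exits 7 with "Stopped", which helpers_test.go treats as possibly ok. When triaging locally, the unformatted status of the same profile gives the per-component view (a sketch; the JSON fields are minikube's own and are not defined by this report):

	out/minikube-darwin-arm64 status -p no-preload-708000 --output json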

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-708000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-708000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-708000 describe deploy/metrics-server -n kube-system: exit status 1 (26.699209ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-708000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-708000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000: exit status 7 (29.200166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-708000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
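The enable call did record the addon in the profile config even though the host never ran; the old-k8s-version SecondStart log further down shows the reloaded profile with metrics-server:true and the fake.domain registry override. The same check can be made without a running cluster (a sketch; exact output format may differ by minikube version):

	out/minikube-darwin-arm64 addons list -p no-preload-708000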

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-663000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-663000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.192088416s)

                                                
                                                
-- stdout --
	* [old-k8s-version-663000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-663000" primary control-plane node in "old-k8s-version-663000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-663000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-663000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:55:17.738562    6268 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:55:17.738715    6268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:17.738719    6268 out.go:358] Setting ErrFile to fd 2...
	I1001 16:55:17.738721    6268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:17.738829    6268 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:55:17.739834    6268 out.go:352] Setting JSON to false
	I1001 16:55:17.755911    6268 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5085,"bootTime":1727821832,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:55:17.755992    6268 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:55:17.760816    6268 out.go:177] * [old-k8s-version-663000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:55:17.767835    6268 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:55:17.767906    6268 notify.go:220] Checking for updates...
	I1001 16:55:17.776676    6268 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:55:17.779716    6268 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:55:17.782829    6268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:55:17.785754    6268 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:55:17.788721    6268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:55:17.792069    6268 config.go:182] Loaded profile config "old-k8s-version-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1001 16:55:17.795666    6268 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1001 16:55:17.798764    6268 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:55:17.802743    6268 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 16:55:17.809747    6268 start.go:297] selected driver: qemu2
	I1001 16:55:17.809753    6268 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-663000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-663000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:55:17.809812    6268 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:55:17.812283    6268 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:55:17.812310    6268 cni.go:84] Creating CNI manager for ""
	I1001 16:55:17.812342    6268 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1001 16:55:17.812368    6268 start.go:340] cluster config:
	{Name:old-k8s-version-663000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-663000 Namespace:defaul
t APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:55:17.816102    6268 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:17.823693    6268 out.go:177] * Starting "old-k8s-version-663000" primary control-plane node in "old-k8s-version-663000" cluster
	I1001 16:55:17.827773    6268 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1001 16:55:17.827788    6268 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1001 16:55:17.827797    6268 cache.go:56] Caching tarball of preloaded images
	I1001 16:55:17.827866    6268 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:55:17.827873    6268 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1001 16:55:17.827929    6268 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/old-k8s-version-663000/config.json ...
	I1001 16:55:17.828374    6268 start.go:360] acquireMachinesLock for old-k8s-version-663000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:55:17.828405    6268 start.go:364] duration metric: took 24.25µs to acquireMachinesLock for "old-k8s-version-663000"
	I1001 16:55:17.828414    6268 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:55:17.828418    6268 fix.go:54] fixHost starting: 
	I1001 16:55:17.828554    6268 fix.go:112] recreateIfNeeded on old-k8s-version-663000: state=Stopped err=<nil>
	W1001 16:55:17.828564    6268 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:55:17.832767    6268 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-663000" ...
	I1001 16:55:17.840680    6268 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:55:17.840713    6268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:35:59:70:ff:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/disk.qcow2
	I1001 16:55:17.842797    6268 main.go:141] libmachine: STDOUT: 
	I1001 16:55:17.842819    6268 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:55:17.842849    6268 fix.go:56] duration metric: took 14.429375ms for fixHost
	I1001 16:55:17.842854    6268 start.go:83] releasing machines lock for "old-k8s-version-663000", held for 14.444333ms
	W1001 16:55:17.842860    6268 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:55:17.842892    6268 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:17.842897    6268 start.go:729] Will try again in 5 seconds ...
	I1001 16:55:22.844981    6268 start.go:360] acquireMachinesLock for old-k8s-version-663000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:55:22.845345    6268 start.go:364] duration metric: took 284.833µs to acquireMachinesLock for "old-k8s-version-663000"
	I1001 16:55:22.845453    6268 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:55:22.845474    6268 fix.go:54] fixHost starting: 
	I1001 16:55:22.846145    6268 fix.go:112] recreateIfNeeded on old-k8s-version-663000: state=Stopped err=<nil>
	W1001 16:55:22.846174    6268 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:55:22.850696    6268 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-663000" ...
	I1001 16:55:22.858537    6268 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:55:22.858744    6268 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:35:59:70:ff:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/old-k8s-version-663000/disk.qcow2
	I1001 16:55:22.868091    6268 main.go:141] libmachine: STDOUT: 
	I1001 16:55:22.868147    6268 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:55:22.868227    6268 fix.go:56] duration metric: took 22.757167ms for fixHost
	I1001 16:55:22.868243    6268 start.go:83] releasing machines lock for "old-k8s-version-663000", held for 22.87725ms
	W1001 16:55:22.868459    6268 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-663000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-663000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:22.876563    6268 out.go:201] 
	W1001 16:55:22.879674    6268 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:55:22.879729    6268 out.go:270] * 
	* 
	W1001 16:55:22.882336    6268 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:55:22.890541    6268 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-663000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000: exit status 7 (66.767209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-708000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-708000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.550350208s)

                                                
                                                
-- stdout --
	* [no-preload-708000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-708000" primary control-plane node in "no-preload-708000" cluster
	* Restarting existing qemu2 VM for "no-preload-708000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-708000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:55:20.499148    6291 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:55:20.499250    6291 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:20.499253    6291 out.go:358] Setting ErrFile to fd 2...
	I1001 16:55:20.499255    6291 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:20.499370    6291 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:55:20.500335    6291 out.go:352] Setting JSON to false
	I1001 16:55:20.516785    6291 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5088,"bootTime":1727821832,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:55:20.516861    6291 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:55:20.520470    6291 out.go:177] * [no-preload-708000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:55:20.527378    6291 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:55:20.527458    6291 notify.go:220] Checking for updates...
	I1001 16:55:20.534241    6291 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:55:20.537372    6291 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:55:20.540427    6291 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:55:20.543433    6291 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:55:20.546429    6291 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:55:20.549730    6291 config.go:182] Loaded profile config "no-preload-708000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:55:20.549990    6291 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:55:20.553486    6291 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 16:55:20.560413    6291 start.go:297] selected driver: qemu2
	I1001 16:55:20.560420    6291 start.go:901] validating driver "qemu2" against &{Name:no-preload-708000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:no-preload-708000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:55:20.560496    6291 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:55:20.562778    6291 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:55:20.562805    6291 cni.go:84] Creating CNI manager for ""
	I1001 16:55:20.562826    6291 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:55:20.562852    6291 start.go:340] cluster config:
	{Name:no-preload-708000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-708000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:55:20.566333    6291 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:20.574421    6291 out.go:177] * Starting "no-preload-708000" primary control-plane node in "no-preload-708000" cluster
	I1001 16:55:20.578430    6291 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:55:20.578517    6291 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/no-preload-708000/config.json ...
	I1001 16:55:20.578559    6291 cache.go:107] acquiring lock: {Name:mk04d0efd994fa5cbd61ff37798e20026905d950 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:20.578571    6291 cache.go:107] acquiring lock: {Name:mk3d2db9881c3f99d7f96a5c119ded40639f07a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:20.578642    6291 cache.go:107] acquiring lock: {Name:mk6b1c72251caeb834b5f051f04e6c0fea1b53e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:20.578657    6291 cache.go:107] acquiring lock: {Name:mk0fd2efae3f671de93bb476544e060c6d6ddd62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:20.578672    6291 cache.go:107] acquiring lock: {Name:mk300718fd75ea01fc8f43fe97958036ecb869da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:20.578678    6291 cache.go:107] acquiring lock: {Name:mkccf526f56bae555098745e5050083249a8b654 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:20.578688    6291 cache.go:107] acquiring lock: {Name:mk4dbdefa2b4eeaf9599cb97d17a3078038d79c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:20.578715    6291 cache.go:107] acquiring lock: {Name:mk6d5a425cd5d9689eebd57086bd62ebe7a32d82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:20.578758    6291 cache.go:115] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1001 16:55:20.578761    6291 cache.go:115] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1001 16:55:20.578773    6291 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 125µs
	I1001 16:55:20.578784    6291 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1001 16:55:20.578737    6291 cache.go:115] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1001 16:55:20.578788    6291 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 235.416µs
	I1001 16:55:20.578792    6291 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 240.792µs
	I1001 16:55:20.578796    6291 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1001 16:55:20.578796    6291 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1001 16:55:20.578797    6291 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1001 16:55:20.578815    6291 cache.go:115] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1001 16:55:20.578821    6291 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 153µs
	I1001 16:55:20.578825    6291 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1001 16:55:20.578831    6291 cache.go:115] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1001 16:55:20.578836    6291 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 235.333µs
	I1001 16:55:20.578839    6291 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1001 16:55:20.578892    6291 cache.go:115] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1001 16:55:20.578899    6291 cache.go:115] /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1001 16:55:20.578898    6291 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 226.833µs
	I1001 16:55:20.578904    6291 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 337.791µs
	I1001 16:55:20.578908    6291 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1001 16:55:20.578905    6291 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1001 16:55:20.578992    6291 start.go:360] acquireMachinesLock for no-preload-708000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:55:20.579022    6291 start.go:364] duration metric: took 23.5µs to acquireMachinesLock for "no-preload-708000"
	I1001 16:55:20.579031    6291 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:55:20.579035    6291 fix.go:54] fixHost starting: 
	I1001 16:55:20.579152    6291 fix.go:112] recreateIfNeeded on no-preload-708000: state=Stopped err=<nil>
	W1001 16:55:20.579160    6291 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:55:20.587400    6291 out.go:177] * Restarting existing qemu2 VM for "no-preload-708000" ...
	I1001 16:55:20.591385    6291 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:55:20.591420    6291 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:cf:b3:0d:e0:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/disk.qcow2
	I1001 16:55:20.591946    6291 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1001 16:55:20.593507    6291 main.go:141] libmachine: STDOUT: 
	I1001 16:55:20.593533    6291 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:55:20.593576    6291 fix.go:56] duration metric: took 14.539417ms for fixHost
	I1001 16:55:20.593582    6291 start.go:83] releasing machines lock for "no-preload-708000", held for 14.55575ms
	W1001 16:55:20.593590    6291 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:55:20.593627    6291 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:20.593632    6291 start.go:729] Will try again in 5 seconds ...
	I1001 16:55:22.502443    6291 cache.go:162] opening:  /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1001 16:55:25.593712    6291 start.go:360] acquireMachinesLock for no-preload-708000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:55:25.938676    6291 start.go:364] duration metric: took 344.853208ms to acquireMachinesLock for "no-preload-708000"
	I1001 16:55:25.938793    6291 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:55:25.938813    6291 fix.go:54] fixHost starting: 
	I1001 16:55:25.939465    6291 fix.go:112] recreateIfNeeded on no-preload-708000: state=Stopped err=<nil>
	W1001 16:55:25.939493    6291 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:55:25.949026    6291 out.go:177] * Restarting existing qemu2 VM for "no-preload-708000" ...
	I1001 16:55:25.966917    6291 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:55:25.967128    6291 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:cf:b3:0d:e0:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/no-preload-708000/disk.qcow2
	I1001 16:55:25.977796    6291 main.go:141] libmachine: STDOUT: 
	I1001 16:55:25.977852    6291 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:55:25.977925    6291 fix.go:56] duration metric: took 39.1125ms for fixHost
	I1001 16:55:25.977948    6291 start.go:83] releasing machines lock for "no-preload-708000", held for 39.240333ms
	W1001 16:55:25.978182    6291 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-708000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-708000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:25.986963    6291 out.go:201] 
	W1001 16:55:25.991058    6291 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:55:25.991104    6291 out.go:270] * 
	* 
	W1001 16:55:25.992777    6291 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:55:26.007074    6291 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-708000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000: exit status 7 (55.778916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-708000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.61s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-663000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000: exit status 7 (32.003209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-663000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-663000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-663000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.584ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-663000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-663000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000: exit status 7 (28.650125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-663000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000: exit status 7 (29.320584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-663000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-663000 --alsologtostderr -v=1: exit status 83 (41.076875ms)

                                                
                                                
-- stdout --
	* The control-plane node old-k8s-version-663000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-663000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:55:23.157612    6314 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:55:23.158000    6314 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:23.158003    6314 out.go:358] Setting ErrFile to fd 2...
	I1001 16:55:23.158006    6314 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:23.158205    6314 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:55:23.158413    6314 out.go:352] Setting JSON to false
	I1001 16:55:23.158421    6314 mustload.go:65] Loading cluster: old-k8s-version-663000
	I1001 16:55:23.158629    6314 config.go:182] Loaded profile config "old-k8s-version-663000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1001 16:55:23.161709    6314 out.go:177] * The control-plane node old-k8s-version-663000 host is not running: state=Stopped
	I1001 16:55:23.165692    6314 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-663000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-663000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000: exit status 7 (29.029584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-663000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000: exit status 7 (28.928167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-663000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (10.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-591000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-591000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.964342083s)

                                                
                                                
-- stdout --
	* [embed-certs-591000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-591000" primary control-plane node in "embed-certs-591000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-591000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:55:23.481200    6331 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:55:23.481344    6331 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:23.481348    6331 out.go:358] Setting ErrFile to fd 2...
	I1001 16:55:23.481351    6331 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:23.481479    6331 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:55:23.482570    6331 out.go:352] Setting JSON to false
	I1001 16:55:23.498580    6331 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5091,"bootTime":1727821832,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:55:23.498649    6331 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:55:23.503721    6331 out.go:177] * [embed-certs-591000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:55:23.509724    6331 notify.go:220] Checking for updates...
	I1001 16:55:23.513677    6331 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:55:23.516655    6331 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:55:23.519683    6331 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:55:23.526529    6331 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:55:23.534693    6331 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:55:23.536083    6331 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:55:23.538972    6331 config.go:182] Loaded profile config "multinode-603000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:55:23.539036    6331 config.go:182] Loaded profile config "no-preload-708000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:55:23.539083    6331 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:55:23.543670    6331 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:55:23.548633    6331 start.go:297] selected driver: qemu2
	I1001 16:55:23.548640    6331 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:55:23.548645    6331 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:55:23.550761    6331 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 16:55:23.554681    6331 out.go:177] * Automatically selected the socket_vmnet network
	I1001 16:55:23.557710    6331 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:55:23.557725    6331 cni.go:84] Creating CNI manager for ""
	I1001 16:55:23.557744    6331 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:55:23.557750    6331 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 16:55:23.557783    6331 start.go:340] cluster config:
	{Name:embed-certs-591000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-591000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:55:23.561324    6331 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:23.568672    6331 out.go:177] * Starting "embed-certs-591000" primary control-plane node in "embed-certs-591000" cluster
	I1001 16:55:23.572651    6331 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:55:23.572672    6331 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:55:23.572679    6331 cache.go:56] Caching tarball of preloaded images
	I1001 16:55:23.572737    6331 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:55:23.572743    6331 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:55:23.572806    6331 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/embed-certs-591000/config.json ...
	I1001 16:55:23.572818    6331 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/embed-certs-591000/config.json: {Name:mk22ea03462a22cf07fd1721bcf9d777d90f1719 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:55:23.573066    6331 start.go:360] acquireMachinesLock for embed-certs-591000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:55:23.573101    6331 start.go:364] duration metric: took 28.958µs to acquireMachinesLock for "embed-certs-591000"
	I1001 16:55:23.573114    6331 start.go:93] Provisioning new machine with config: &{Name:embed-certs-591000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.31.1 ClusterName:embed-certs-591000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:55:23.573159    6331 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:55:23.577698    6331 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 16:55:23.595611    6331 start.go:159] libmachine.API.Create for "embed-certs-591000" (driver="qemu2")
	I1001 16:55:23.595635    6331 client.go:168] LocalClient.Create starting
	I1001 16:55:23.595697    6331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:55:23.595731    6331 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:23.595739    6331 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:23.595784    6331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:55:23.595809    6331 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:23.595820    6331 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:23.596168    6331 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:55:23.762359    6331 main.go:141] libmachine: Creating SSH key...
	I1001 16:55:23.918792    6331 main.go:141] libmachine: Creating Disk image...
	I1001 16:55:23.918799    6331 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:55:23.919033    6331 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/disk.qcow2
	I1001 16:55:23.928413    6331 main.go:141] libmachine: STDOUT: 
	I1001 16:55:23.928439    6331 main.go:141] libmachine: STDERR: 
	I1001 16:55:23.928499    6331 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/disk.qcow2 +20000M
	I1001 16:55:23.936470    6331 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:55:23.936484    6331 main.go:141] libmachine: STDERR: 
	I1001 16:55:23.936508    6331 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/disk.qcow2
	I1001 16:55:23.936513    6331 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:55:23.936523    6331 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:55:23.936556    6331 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:94:09:da:75:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/disk.qcow2
	I1001 16:55:23.938158    6331 main.go:141] libmachine: STDOUT: 
	I1001 16:55:23.938171    6331 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:55:23.938192    6331 client.go:171] duration metric: took 342.556ms to LocalClient.Create
	I1001 16:55:25.938493    6331 start.go:128] duration metric: took 2.365338834s to createHost
	I1001 16:55:25.938564    6331 start.go:83] releasing machines lock for "embed-certs-591000", held for 2.365489709s
	W1001 16:55:25.938621    6331 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:25.963053    6331 out.go:177] * Deleting "embed-certs-591000" in qemu2 ...
	W1001 16:55:26.023045    6331 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:26.023067    6331 start.go:729] Will try again in 5 seconds ...
	I1001 16:55:31.025287    6331 start.go:360] acquireMachinesLock for embed-certs-591000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:55:31.025798    6331 start.go:364] duration metric: took 365µs to acquireMachinesLock for "embed-certs-591000"
	I1001 16:55:31.025917    6331 start.go:93] Provisioning new machine with config: &{Name:embed-certs-591000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.31.1 ClusterName:embed-certs-591000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:55:31.026163    6331 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:55:31.033793    6331 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 16:55:31.085555    6331 start.go:159] libmachine.API.Create for "embed-certs-591000" (driver="qemu2")
	I1001 16:55:31.085612    6331 client.go:168] LocalClient.Create starting
	I1001 16:55:31.085734    6331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:55:31.085804    6331 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:31.085822    6331 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:31.085887    6331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:55:31.085931    6331 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:31.085947    6331 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:31.086470    6331 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:55:31.270173    6331 main.go:141] libmachine: Creating SSH key...
	I1001 16:55:31.354635    6331 main.go:141] libmachine: Creating Disk image...
	I1001 16:55:31.354641    6331 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:55:31.354877    6331 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/disk.qcow2
	I1001 16:55:31.363928    6331 main.go:141] libmachine: STDOUT: 
	I1001 16:55:31.363949    6331 main.go:141] libmachine: STDERR: 
	I1001 16:55:31.364001    6331 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/disk.qcow2 +20000M
	I1001 16:55:31.371723    6331 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:55:31.371739    6331 main.go:141] libmachine: STDERR: 
	I1001 16:55:31.371752    6331 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/disk.qcow2
	I1001 16:55:31.371755    6331 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:55:31.371763    6331 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:55:31.371788    6331 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:92:06:d0:71:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/disk.qcow2
	I1001 16:55:31.373356    6331 main.go:141] libmachine: STDOUT: 
	I1001 16:55:31.373371    6331 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:55:31.373384    6331 client.go:171] duration metric: took 287.7705ms to LocalClient.Create
	I1001 16:55:33.375595    6331 start.go:128] duration metric: took 2.349439417s to createHost
	I1001 16:55:33.375725    6331 start.go:83] releasing machines lock for "embed-certs-591000", held for 2.349938667s
	W1001 16:55:33.376046    6331 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-591000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-591000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:33.384607    6331 out.go:201] 
	W1001 16:55:33.391746    6331 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:55:33.391771    6331 out.go:270] * 
	* 
	W1001 16:55:33.394252    6331 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:55:33.404707    6331 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-591000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000: exit status 7 (64.909417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-591000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.03s)
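(Editor's note, not part of the test output.) Every start failure in this group follows the same pattern visible in the log above: libmachine shells out to /opt/socket_vmnet/bin/socket_vmnet_client against /var/run/socket_vmnet and the dial fails with "Connection refused", so no VM network is ever attached. A minimal Go sketch of that precondition check is below; it is only illustrative, the socket path is taken from the log above, and it is not part of the minikube test suite.

    // socketcheck.go - probe the socket_vmnet control socket the way the
    // failing start step implicitly requires. Illustrative sketch only;
    // the socket path is assumed from the log output above.
    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	const sock = "/var/run/socket_vmnet" // path seen in the failing qemu invocation
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		// On the runs above this reports "connection refused",
    		// i.e. nothing is listening on the socket.
    		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections at", sock)
    }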

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-708000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000: exit status 7 (31.25225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-708000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-708000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-708000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-708000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.2205ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-708000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-708000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000: exit status 7 (29.243792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-708000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-708000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000: exit status 7 (29.856958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-708000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-708000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-708000 --alsologtostderr -v=1: exit status 83 (44.659709ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-708000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-708000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:55:26.264989    6353 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:55:26.265149    6353 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:26.265153    6353 out.go:358] Setting ErrFile to fd 2...
	I1001 16:55:26.265155    6353 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:26.265294    6353 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:55:26.265516    6353 out.go:352] Setting JSON to false
	I1001 16:55:26.265524    6353 mustload.go:65] Loading cluster: no-preload-708000
	I1001 16:55:26.265745    6353 config.go:182] Loaded profile config "no-preload-708000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:55:26.269429    6353 out.go:177] * The control-plane node no-preload-708000 host is not running: state=Stopped
	I1001 16:55:26.276572    6353 out.go:177]   To start a cluster, run: "minikube start -p no-preload-708000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-708000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000: exit status 7 (29.234375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-708000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000: exit status 7 (29.545792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-708000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-311000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-311000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.7747915s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-311000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-311000" primary control-plane node in "default-k8s-diff-port-311000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-311000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:55:26.699945    6377 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:55:26.700070    6377 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:26.700073    6377 out.go:358] Setting ErrFile to fd 2...
	I1001 16:55:26.700075    6377 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:26.700207    6377 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:55:26.701296    6377 out.go:352] Setting JSON to false
	I1001 16:55:26.717374    6377 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5094,"bootTime":1727821832,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:55:26.717443    6377 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:55:26.722490    6377 out.go:177] * [default-k8s-diff-port-311000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:55:26.729453    6377 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:55:26.729500    6377 notify.go:220] Checking for updates...
	I1001 16:55:26.736369    6377 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:55:26.740454    6377 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:55:26.743424    6377 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:55:26.746405    6377 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:55:26.749562    6377 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:55:26.752719    6377 config.go:182] Loaded profile config "embed-certs-591000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:55:26.752778    6377 config.go:182] Loaded profile config "multinode-603000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:55:26.752816    6377 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:55:26.757406    6377 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:55:26.763377    6377 start.go:297] selected driver: qemu2
	I1001 16:55:26.763383    6377 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:55:26.763390    6377 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:55:26.765654    6377 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 16:55:26.769455    6377 out.go:177] * Automatically selected the socket_vmnet network
	I1001 16:55:26.772554    6377 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:55:26.772573    6377 cni.go:84] Creating CNI manager for ""
	I1001 16:55:26.772608    6377 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:55:26.772618    6377 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 16:55:26.772642    6377 start.go:340] cluster config:
	{Name:default-k8s-diff-port-311000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:55:26.776522    6377 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:26.781415    6377 out.go:177] * Starting "default-k8s-diff-port-311000" primary control-plane node in "default-k8s-diff-port-311000" cluster
	I1001 16:55:26.789421    6377 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:55:26.789437    6377 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:55:26.789447    6377 cache.go:56] Caching tarball of preloaded images
	I1001 16:55:26.789515    6377 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:55:26.789523    6377 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:55:26.789586    6377 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/default-k8s-diff-port-311000/config.json ...
	I1001 16:55:26.789598    6377 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/default-k8s-diff-port-311000/config.json: {Name:mkbba0172e942b2034300fb95d63ee39f3a5299b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:55:26.789825    6377 start.go:360] acquireMachinesLock for default-k8s-diff-port-311000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:55:26.789863    6377 start.go:364] duration metric: took 29.417µs to acquireMachinesLock for "default-k8s-diff-port-311000"
	I1001 16:55:26.789877    6377 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:55:26.789909    6377 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:55:26.797425    6377 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 16:55:26.816668    6377 start.go:159] libmachine.API.Create for "default-k8s-diff-port-311000" (driver="qemu2")
	I1001 16:55:26.816708    6377 client.go:168] LocalClient.Create starting
	I1001 16:55:26.816784    6377 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:55:26.816819    6377 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:26.816829    6377 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:26.816870    6377 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:55:26.816895    6377 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:26.816900    6377 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:26.817280    6377 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:55:26.979743    6377 main.go:141] libmachine: Creating SSH key...
	I1001 16:55:27.038140    6377 main.go:141] libmachine: Creating Disk image...
	I1001 16:55:27.038152    6377 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:55:27.038395    6377 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/disk.qcow2
	I1001 16:55:27.047460    6377 main.go:141] libmachine: STDOUT: 
	I1001 16:55:27.047477    6377 main.go:141] libmachine: STDERR: 
	I1001 16:55:27.047557    6377 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/disk.qcow2 +20000M
	I1001 16:55:27.055249    6377 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:55:27.055274    6377 main.go:141] libmachine: STDERR: 
	I1001 16:55:27.055288    6377 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/disk.qcow2
	I1001 16:55:27.055293    6377 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:55:27.055316    6377 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:55:27.055339    6377 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:21:fd:9d:13:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/disk.qcow2
	I1001 16:55:27.056937    6377 main.go:141] libmachine: STDOUT: 
	I1001 16:55:27.056951    6377 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:55:27.056978    6377 client.go:171] duration metric: took 240.267584ms to LocalClient.Create
	I1001 16:55:29.059188    6377 start.go:128] duration metric: took 2.269286625s to createHost
	I1001 16:55:29.059251    6377 start.go:83] releasing machines lock for "default-k8s-diff-port-311000", held for 2.269410084s
	W1001 16:55:29.059303    6377 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:29.070192    6377 out.go:177] * Deleting "default-k8s-diff-port-311000" in qemu2 ...
	W1001 16:55:29.110514    6377 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:29.110540    6377 start.go:729] Will try again in 5 seconds ...
	I1001 16:55:34.112634    6377 start.go:360] acquireMachinesLock for default-k8s-diff-port-311000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:55:34.113039    6377 start.go:364] duration metric: took 325.417µs to acquireMachinesLock for "default-k8s-diff-port-311000"
	I1001 16:55:34.113190    6377 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:55:34.113542    6377 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:55:34.119261    6377 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 16:55:34.169876    6377 start.go:159] libmachine.API.Create for "default-k8s-diff-port-311000" (driver="qemu2")
	I1001 16:55:34.169957    6377 client.go:168] LocalClient.Create starting
	I1001 16:55:34.170100    6377 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:55:34.170152    6377 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:34.170176    6377 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:34.170251    6377 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:55:34.170281    6377 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:34.170293    6377 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:34.170960    6377 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:55:34.349861    6377 main.go:141] libmachine: Creating SSH key...
	I1001 16:55:34.384663    6377 main.go:141] libmachine: Creating Disk image...
	I1001 16:55:34.384668    6377 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:55:34.384871    6377 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/disk.qcow2
	I1001 16:55:34.394200    6377 main.go:141] libmachine: STDOUT: 
	I1001 16:55:34.394220    6377 main.go:141] libmachine: STDERR: 
	I1001 16:55:34.394279    6377 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/disk.qcow2 +20000M
	I1001 16:55:34.402034    6377 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:55:34.402049    6377 main.go:141] libmachine: STDERR: 
	I1001 16:55:34.402071    6377 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/disk.qcow2
	I1001 16:55:34.402079    6377 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:55:34.402087    6377 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:55:34.402124    6377 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:3f:38:39:27:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/disk.qcow2
	I1001 16:55:34.403764    6377 main.go:141] libmachine: STDOUT: 
	I1001 16:55:34.403777    6377 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:55:34.403791    6377 client.go:171] duration metric: took 233.818125ms to LocalClient.Create
	I1001 16:55:36.405946    6377 start.go:128] duration metric: took 2.292411834s to createHost
	I1001 16:55:36.406017    6377 start.go:83] releasing machines lock for "default-k8s-diff-port-311000", held for 2.292990584s
	W1001 16:55:36.406547    6377 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-311000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-311000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:36.418303    6377 out.go:201] 
	W1001 16:55:36.422304    6377 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:55:36.422332    6377 out.go:270] * 
	* 
	W1001 16:55:36.425239    6377 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:55:36.434248    6377 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-311000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000: exit status 7 (63.893709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-311000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.84s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-591000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-591000 create -f testdata/busybox.yaml: exit status 1 (30.002083ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-591000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-591000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000: exit status 7 (29.065375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-591000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000: exit status 7 (28.809292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-591000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-591000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-591000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-591000 describe deploy/metrics-server -n kube-system: exit status 1 (26.641416ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-591000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-591000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000: exit status 7 (29.535542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-591000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-311000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-311000 create -f testdata/busybox.yaml: exit status 1 (30.272084ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-311000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-311000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000: exit status 7 (29.219208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-311000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000: exit status 7 (29.259917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-311000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-311000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-311000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-311000 describe deploy/metrics-server -n kube-system: exit status 1 (26.453209ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-311000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-311000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000: exit status 7 (28.847458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-311000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
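Note: the expected string " fake.domain/registry.k8s.io/echoserver:1.4" in the EnableAddonWhileActive failures above is simply the custom registry from --registries=MetricsServer=fake.domain joined to the image from --images=MetricsServer=registry.k8s.io/echoserver:1.4. A minimal Go sketch of that relationship, for illustration only (hypothetical, not the test code):

package main

import "fmt"

func main() {
	// Values taken from the addon flags in the test invocation above:
	//   --registries=MetricsServer=fake.domain
	//   --images=MetricsServer=registry.k8s.io/echoserver:1.4
	registry := "fake.domain"
	image := "registry.k8s.io/echoserver:1.4"
	// The metrics-server deployment is expected to reference the registry-prefixed image.
	fmt.Println(registry + "/" + image) // prints: fake.domain/registry.k8s.io/echoserver:1.4
}

Here the check never gets that far: the kubectl context does not exist, so the deployment info is empty.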

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-591000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-591000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.180580417s)

                                                
                                                
-- stdout --
	* [embed-certs-591000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-591000" primary control-plane node in "embed-certs-591000" cluster
	* Restarting existing qemu2 VM for "embed-certs-591000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-591000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:55:37.434864    6447 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:55:37.435014    6447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:37.435017    6447 out.go:358] Setting ErrFile to fd 2...
	I1001 16:55:37.435019    6447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:37.435163    6447 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:55:37.436113    6447 out.go:352] Setting JSON to false
	I1001 16:55:37.452033    6447 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5105,"bootTime":1727821832,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:55:37.452107    6447 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:55:37.457360    6447 out.go:177] * [embed-certs-591000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:55:37.464488    6447 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:55:37.464569    6447 notify.go:220] Checking for updates...
	I1001 16:55:37.470391    6447 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:55:37.473453    6447 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:55:37.474876    6447 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:55:37.477412    6447 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:55:37.480438    6447 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:55:37.483759    6447 config.go:182] Loaded profile config "embed-certs-591000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:55:37.484041    6447 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:55:37.487387    6447 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 16:55:37.494456    6447 start.go:297] selected driver: qemu2
	I1001 16:55:37.494463    6447 start.go:901] validating driver "qemu2" against &{Name:embed-certs-591000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:embed-certs-591000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:55:37.494522    6447 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:55:37.496711    6447 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:55:37.496738    6447 cni.go:84] Creating CNI manager for ""
	I1001 16:55:37.496758    6447 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:55:37.496786    6447 start.go:340] cluster config:
	{Name:embed-certs-591000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-591000 Namespace:default APISer
verHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:55:37.500291    6447 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:37.508457    6447 out.go:177] * Starting "embed-certs-591000" primary control-plane node in "embed-certs-591000" cluster
	I1001 16:55:37.512404    6447 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:55:37.512421    6447 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:55:37.512430    6447 cache.go:56] Caching tarball of preloaded images
	I1001 16:55:37.512499    6447 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:55:37.512505    6447 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:55:37.512578    6447 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/embed-certs-591000/config.json ...
	I1001 16:55:37.513044    6447 start.go:360] acquireMachinesLock for embed-certs-591000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:55:37.513073    6447 start.go:364] duration metric: took 22.916µs to acquireMachinesLock for "embed-certs-591000"
	I1001 16:55:37.513082    6447 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:55:37.513087    6447 fix.go:54] fixHost starting: 
	I1001 16:55:37.513209    6447 fix.go:112] recreateIfNeeded on embed-certs-591000: state=Stopped err=<nil>
	W1001 16:55:37.513217    6447 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:55:37.516515    6447 out.go:177] * Restarting existing qemu2 VM for "embed-certs-591000" ...
	I1001 16:55:37.524431    6447 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:55:37.524475    6447 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:92:06:d0:71:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/disk.qcow2
	I1001 16:55:37.526406    6447 main.go:141] libmachine: STDOUT: 
	I1001 16:55:37.526425    6447 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:55:37.526459    6447 fix.go:56] duration metric: took 13.372166ms for fixHost
	I1001 16:55:37.526465    6447 start.go:83] releasing machines lock for "embed-certs-591000", held for 13.386958ms
	W1001 16:55:37.526472    6447 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:55:37.526502    6447 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:37.526507    6447 start.go:729] Will try again in 5 seconds ...
	I1001 16:55:42.528564    6447 start.go:360] acquireMachinesLock for embed-certs-591000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:55:42.528904    6447 start.go:364] duration metric: took 271.459µs to acquireMachinesLock for "embed-certs-591000"
	I1001 16:55:42.529028    6447 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:55:42.529048    6447 fix.go:54] fixHost starting: 
	I1001 16:55:42.529749    6447 fix.go:112] recreateIfNeeded on embed-certs-591000: state=Stopped err=<nil>
	W1001 16:55:42.529782    6447 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:55:42.539280    6447 out.go:177] * Restarting existing qemu2 VM for "embed-certs-591000" ...
	I1001 16:55:42.542381    6447 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:55:42.542542    6447 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:92:06:d0:71:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/embed-certs-591000/disk.qcow2
	I1001 16:55:42.551873    6447 main.go:141] libmachine: STDOUT: 
	I1001 16:55:42.551966    6447 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:55:42.552072    6447 fix.go:56] duration metric: took 23.019875ms for fixHost
	I1001 16:55:42.552095    6447 start.go:83] releasing machines lock for "embed-certs-591000", held for 23.170583ms
	W1001 16:55:42.552337    6447 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-591000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-591000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:42.559295    6447 out.go:201] 
	W1001 16:55:42.563220    6447 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:55:42.563252    6447 out.go:270] * 
	* 
	W1001 16:55:42.565783    6447 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:55:42.574285    6447 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-591000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000: exit status 7 (66.854083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-591000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.25s)
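Note: every restart attempt above fails with ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. nothing is accepting connections on the socket_vmnet unix socket that the qemu2 driver reaches through /opt/socket_vmnet/bin/socket_vmnet_client. A minimal reachability probe, as a hypothetical Go sketch (not part of the test suite), assuming the SocketVMnetPath shown in the profile config above:

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// SocketVMnetPath from the cluster config dumped in the log above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.Dial("unix", sock)
	if err != nil {
		// On this agent the dial fails, matching the driver error in the log.
		fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}

The same connection-refused error appears for several profiles in this section (embed-certs, default-k8s-diff-port, newest-cni), which points at the host-side socket_vmnet service on the agent rather than any individual profile.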

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-311000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-311000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.254076958s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-311000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-311000" primary control-plane node in "default-k8s-diff-port-311000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-311000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-311000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:55:40.369633    6470 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:55:40.369774    6470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:40.369777    6470 out.go:358] Setting ErrFile to fd 2...
	I1001 16:55:40.369779    6470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:40.369895    6470 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:55:40.370896    6470 out.go:352] Setting JSON to false
	I1001 16:55:40.387039    6470 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5108,"bootTime":1727821832,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:55:40.387094    6470 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:55:40.391474    6470 out.go:177] * [default-k8s-diff-port-311000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:55:40.399533    6470 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:55:40.399596    6470 notify.go:220] Checking for updates...
	I1001 16:55:40.407509    6470 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:55:40.408900    6470 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:55:40.411483    6470 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:55:40.414571    6470 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:55:40.417494    6470 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:55:40.420800    6470 config.go:182] Loaded profile config "default-k8s-diff-port-311000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:55:40.421064    6470 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:55:40.424559    6470 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 16:55:40.431492    6470 start.go:297] selected driver: qemu2
	I1001 16:55:40.431499    6470 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:
false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:55:40.431559    6470 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:55:40.433874    6470 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 16:55:40.433901    6470 cni.go:84] Creating CNI manager for ""
	I1001 16:55:40.433922    6470 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:55:40.433945    6470 start.go:340] cluster config:
	{Name:default-k8s-diff-port-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-311000 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/mi
nikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:55:40.437448    6470 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:40.446483    6470 out.go:177] * Starting "default-k8s-diff-port-311000" primary control-plane node in "default-k8s-diff-port-311000" cluster
	I1001 16:55:40.450498    6470 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:55:40.450514    6470 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:55:40.450534    6470 cache.go:56] Caching tarball of preloaded images
	I1001 16:55:40.450593    6470 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:55:40.450599    6470 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:55:40.450681    6470 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/default-k8s-diff-port-311000/config.json ...
	I1001 16:55:40.451126    6470 start.go:360] acquireMachinesLock for default-k8s-diff-port-311000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:55:40.451158    6470 start.go:364] duration metric: took 24.625µs to acquireMachinesLock for "default-k8s-diff-port-311000"
	I1001 16:55:40.451166    6470 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:55:40.451172    6470 fix.go:54] fixHost starting: 
	I1001 16:55:40.451302    6470 fix.go:112] recreateIfNeeded on default-k8s-diff-port-311000: state=Stopped err=<nil>
	W1001 16:55:40.451313    6470 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:55:40.455501    6470 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-311000" ...
	I1001 16:55:40.463484    6470 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:55:40.463522    6470 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:3f:38:39:27:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/disk.qcow2
	I1001 16:55:40.465505    6470 main.go:141] libmachine: STDOUT: 
	I1001 16:55:40.465531    6470 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:55:40.465561    6470 fix.go:56] duration metric: took 14.388959ms for fixHost
	I1001 16:55:40.465565    6470 start.go:83] releasing machines lock for "default-k8s-diff-port-311000", held for 14.403084ms
	W1001 16:55:40.465572    6470 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:55:40.465609    6470 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:40.465614    6470 start.go:729] Will try again in 5 seconds ...
	I1001 16:55:45.467635    6470 start.go:360] acquireMachinesLock for default-k8s-diff-port-311000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:55:45.527841    6470 start.go:364] duration metric: took 60.09425ms to acquireMachinesLock for "default-k8s-diff-port-311000"
	I1001 16:55:45.527932    6470 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:55:45.527958    6470 fix.go:54] fixHost starting: 
	I1001 16:55:45.528657    6470 fix.go:112] recreateIfNeeded on default-k8s-diff-port-311000: state=Stopped err=<nil>
	W1001 16:55:45.528686    6470 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:55:45.537159    6470 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-311000" ...
	I1001 16:55:45.554138    6470 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:55:45.554370    6470 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:3f:38:39:27:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/default-k8s-diff-port-311000/disk.qcow2
	I1001 16:55:45.564123    6470 main.go:141] libmachine: STDOUT: 
	I1001 16:55:45.564323    6470 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:55:45.564401    6470 fix.go:56] duration metric: took 36.448708ms for fixHost
	I1001 16:55:45.564415    6470 start.go:83] releasing machines lock for "default-k8s-diff-port-311000", held for 36.521834ms
	W1001 16:55:45.564574    6470 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-311000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-311000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:45.570080    6470 out.go:201] 
	W1001 16:55:45.573162    6470 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:55:45.573219    6470 out.go:270] * 
	* 
	W1001 16:55:45.574961    6470 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:55:45.584058    6470 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-311000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000: exit status 7 (60.784958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-311000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-591000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000: exit status 7 (31.920667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-591000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-591000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-591000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-591000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.030791ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-591000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-591000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000: exit status 7 (29.635125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-591000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-591000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000: exit status 7 (29.360917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-591000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
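Note: the (-want +got) listing above is go-cmp diff notation; every expected v1.31.1 image is reported as missing because `image list` returned nothing from the stopped VM. A hypothetical sketch of how such a want/got comparison can be produced with github.com/google/go-cmp (not the actual test code; exact diff formatting may differ):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// Two of the images expected for Kubernetes v1.31.1, per the diff above.
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/pause:3.10",
	}
	got := []string{} // empty: the stopped VM reported no images
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("images missing (-want +got):\n%s", diff)
	}
}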

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-591000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-591000 --alsologtostderr -v=1: exit status 83 (41.242625ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-591000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-591000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:55:42.844165    6489 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:55:42.844331    6489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:42.844340    6489 out.go:358] Setting ErrFile to fd 2...
	I1001 16:55:42.844342    6489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:42.844472    6489 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:55:42.844701    6489 out.go:352] Setting JSON to false
	I1001 16:55:42.844712    6489 mustload.go:65] Loading cluster: embed-certs-591000
	I1001 16:55:42.844933    6489 config.go:182] Loaded profile config "embed-certs-591000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:55:42.849712    6489 out.go:177] * The control-plane node embed-certs-591000 host is not running: state=Stopped
	I1001 16:55:42.853606    6489 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-591000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-591000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000: exit status 7 (29.522834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-591000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000: exit status 7 (28.845959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-591000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-584000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-584000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.886064709s)

                                                
                                                
-- stdout --
	* [newest-cni-584000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-584000" primary control-plane node in "newest-cni-584000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-584000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:55:43.162933    6506 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:55:43.163081    6506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:43.163084    6506 out.go:358] Setting ErrFile to fd 2...
	I1001 16:55:43.163086    6506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:43.163214    6506 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:55:43.164353    6506 out.go:352] Setting JSON to false
	I1001 16:55:43.180318    6506 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5111,"bootTime":1727821832,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:55:43.180415    6506 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:55:43.185665    6506 out.go:177] * [newest-cni-584000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:55:43.192686    6506 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:55:43.192712    6506 notify.go:220] Checking for updates...
	I1001 16:55:43.199645    6506 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:55:43.201063    6506 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:55:43.204697    6506 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:55:43.207625    6506 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:55:43.210649    6506 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:55:43.213982    6506 config.go:182] Loaded profile config "default-k8s-diff-port-311000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:55:43.214047    6506 config.go:182] Loaded profile config "multinode-603000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:55:43.214095    6506 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:55:43.218650    6506 out.go:177] * Using the qemu2 driver based on user configuration
	I1001 16:55:43.225609    6506 start.go:297] selected driver: qemu2
	I1001 16:55:43.225614    6506 start.go:901] validating driver "qemu2" against <nil>
	I1001 16:55:43.225628    6506 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:55:43.227742    6506 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1001 16:55:43.227783    6506 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1001 16:55:43.236647    6506 out.go:177] * Automatically selected the socket_vmnet network
	I1001 16:55:43.239748    6506 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1001 16:55:43.239768    6506 cni.go:84] Creating CNI manager for ""
	I1001 16:55:43.239794    6506 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:55:43.239798    6506 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 16:55:43.239830    6506 start.go:340] cluster config:
	{Name:newest-cni-584000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:55:43.243650    6506 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:43.252551    6506 out.go:177] * Starting "newest-cni-584000" primary control-plane node in "newest-cni-584000" cluster
	I1001 16:55:43.256609    6506 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:55:43.256623    6506 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:55:43.256636    6506 cache.go:56] Caching tarball of preloaded images
	I1001 16:55:43.256703    6506 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:55:43.256709    6506 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:55:43.256782    6506 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/newest-cni-584000/config.json ...
	I1001 16:55:43.256798    6506 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/newest-cni-584000/config.json: {Name:mk752a4c1b0b2df5d4d8bff2934cf60efb356284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 16:55:43.257026    6506 start.go:360] acquireMachinesLock for newest-cni-584000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:55:43.257061    6506 start.go:364] duration metric: took 29.041µs to acquireMachinesLock for "newest-cni-584000"
	I1001 16:55:43.257074    6506 start.go:93] Provisioning new machine with config: &{Name:newest-cni-584000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.31.1 ClusterName:newest-cni-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube
-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:55:43.257112    6506 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:55:43.264610    6506 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 16:55:43.283827    6506 start.go:159] libmachine.API.Create for "newest-cni-584000" (driver="qemu2")
	I1001 16:55:43.283860    6506 client.go:168] LocalClient.Create starting
	I1001 16:55:43.283943    6506 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:55:43.283988    6506 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:43.284003    6506 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:43.284050    6506 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:55:43.284076    6506 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:43.284085    6506 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:43.284537    6506 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:55:43.446992    6506 main.go:141] libmachine: Creating SSH key...
	I1001 16:55:43.506092    6506 main.go:141] libmachine: Creating Disk image...
	I1001 16:55:43.506099    6506 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:55:43.506334    6506 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/disk.qcow2
	I1001 16:55:43.515533    6506 main.go:141] libmachine: STDOUT: 
	I1001 16:55:43.515557    6506 main.go:141] libmachine: STDERR: 
	I1001 16:55:43.515617    6506 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/disk.qcow2 +20000M
	I1001 16:55:43.523592    6506 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:55:43.523624    6506 main.go:141] libmachine: STDERR: 
	I1001 16:55:43.523637    6506 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/disk.qcow2
	I1001 16:55:43.523641    6506 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:55:43.523651    6506 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:55:43.523679    6506 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:ca:c8:e5:5e:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/disk.qcow2
	I1001 16:55:43.525359    6506 main.go:141] libmachine: STDOUT: 
	I1001 16:55:43.525380    6506 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:55:43.525401    6506 client.go:171] duration metric: took 241.53875ms to LocalClient.Create
	I1001 16:55:45.527610    6506 start.go:128] duration metric: took 2.270514583s to createHost
	I1001 16:55:45.527655    6506 start.go:83] releasing machines lock for "newest-cni-584000", held for 2.270620959s
	W1001 16:55:45.527719    6506 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:45.550101    6506 out.go:177] * Deleting "newest-cni-584000" in qemu2 ...
	W1001 16:55:45.605517    6506 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:45.605549    6506 start.go:729] Will try again in 5 seconds ...
	I1001 16:55:50.607689    6506 start.go:360] acquireMachinesLock for newest-cni-584000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:55:50.607992    6506 start.go:364] duration metric: took 223.459µs to acquireMachinesLock for "newest-cni-584000"
	I1001 16:55:50.608063    6506 start.go:93] Provisioning new machine with config: &{Name:newest-cni-584000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.31.1 ClusterName:newest-cni-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube
-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1001 16:55:50.608228    6506 start.go:125] createHost starting for "" (driver="qemu2")
	I1001 16:55:50.612786    6506 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1001 16:55:50.655237    6506 start.go:159] libmachine.API.Create for "newest-cni-584000" (driver="qemu2")
	I1001 16:55:50.655299    6506 client.go:168] LocalClient.Create starting
	I1001 16:55:50.655455    6506 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/ca.pem
	I1001 16:55:50.655524    6506 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:50.655545    6506 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:50.655627    6506 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19740-1141/.minikube/certs/cert.pem
	I1001 16:55:50.655679    6506 main.go:141] libmachine: Decoding PEM data...
	I1001 16:55:50.655697    6506 main.go:141] libmachine: Parsing certificate...
	I1001 16:55:50.656377    6506 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1001 16:55:50.833135    6506 main.go:141] libmachine: Creating SSH key...
	I1001 16:55:50.951676    6506 main.go:141] libmachine: Creating Disk image...
	I1001 16:55:50.951685    6506 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1001 16:55:50.951945    6506 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/disk.qcow2.raw /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/disk.qcow2
	I1001 16:55:50.961563    6506 main.go:141] libmachine: STDOUT: 
	I1001 16:55:50.961579    6506 main.go:141] libmachine: STDERR: 
	I1001 16:55:50.961628    6506 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/disk.qcow2 +20000M
	I1001 16:55:50.969458    6506 main.go:141] libmachine: STDOUT: Image resized.
	
	I1001 16:55:50.969473    6506 main.go:141] libmachine: STDERR: 
	I1001 16:55:50.969483    6506 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/disk.qcow2
	I1001 16:55:50.969488    6506 main.go:141] libmachine: Starting QEMU VM...
	I1001 16:55:50.969498    6506 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:55:50.969529    6506 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:e1:8b:55:0a:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/disk.qcow2
	I1001 16:55:50.971154    6506 main.go:141] libmachine: STDOUT: 
	I1001 16:55:50.971167    6506 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:55:50.971180    6506 client.go:171] duration metric: took 315.880041ms to LocalClient.Create
	I1001 16:55:52.973322    6506 start.go:128] duration metric: took 2.365111334s to createHost
	I1001 16:55:52.973379    6506 start.go:83] releasing machines lock for "newest-cni-584000", held for 2.365408542s
	W1001 16:55:52.973740    6506 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-584000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-584000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:52.990409    6506 out.go:201] 
	W1001 16:55:52.993441    6506 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:55:52.993464    6506 out.go:270] * 
	* 
	W1001 16:55:52.996207    6506 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:55:53.010321    6506 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-584000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000: exit status 7 (66.572083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-584000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.96s)
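The failure above is the recurring socket_vmnet error in this run: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, but nothing is answering on /var/run/socket_vmnet, so every VM create dies with "Connection refused". A minimal diagnostic sketch for the host, assuming socket_vmnet was installed via Homebrew and is meant to run as a root launchd service (service name assumed; adjust for this agent's setup):

    # Is anything serving the socket the tests point at?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # If not, (re)start the daemon before re-running the suite (Homebrew install assumed)
    sudo brew services restart socket_vmnet

Once the socket is served again, the start command quoted in this failure can be retried as-is.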

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-311000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000: exit status 7 (31.452958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-311000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
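This sub-test fails before it can talk to the cluster: the kubeconfig context "default-k8s-diff-port-311000" was never written because the profile's VM never started. A quick hedged check, using the KUBECONFIG path this run advertises, to confirm which contexts actually exist:

    # Contexts minikube has written so far; a healthy profile would be listed here
    kubectl config get-contexts --kubeconfig=/Users/jenkins/minikube-integration/19740-1141/kubeconfig

    # Profile-level view from minikube itself
    out/minikube-darwin-arm64 profile list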

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-311000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-311000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-311000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.8265ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-311000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-311000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000: exit status 7 (29.376792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-311000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-311000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000: exit status 7 (29.323334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-311000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
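The (-want +got) diff above comes from the image-list check: for v1.31.1 the test expects the listed control-plane images, but "image list" returns nothing because the host is stopped. The images themselves are already cached on the agent; a hedged way to confirm that, using the preload path printed earlier in this run:

    # Preload tarball that carries the expected v1.31.1 images
    ls -lh /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4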

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-311000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-311000 --alsologtostderr -v=1: exit status 83 (50.138208ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-311000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-311000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:55:45.842874    6528 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:55:45.843010    6528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:45.843014    6528 out.go:358] Setting ErrFile to fd 2...
	I1001 16:55:45.843016    6528 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:45.843167    6528 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:55:45.843391    6528 out.go:352] Setting JSON to false
	I1001 16:55:45.843402    6528 mustload.go:65] Loading cluster: default-k8s-diff-port-311000
	I1001 16:55:45.843622    6528 config.go:182] Loaded profile config "default-k8s-diff-port-311000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:55:45.847335    6528 out.go:177] * The control-plane node default-k8s-diff-port-311000 host is not running: state=Stopped
	I1001 16:55:45.860469    6528 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-311000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-311000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000: exit status 7 (29.72925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-311000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000: exit status 7 (29.125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-311000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-584000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-584000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.18667075s)

                                                
                                                
-- stdout --
	* [newest-cni-584000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-584000" primary control-plane node in "newest-cni-584000" cluster
	* Restarting existing qemu2 VM for "newest-cni-584000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-584000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:55:56.169161    6578 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:55:56.169265    6578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:56.169269    6578 out.go:358] Setting ErrFile to fd 2...
	I1001 16:55:56.169272    6578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:55:56.169402    6578 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:55:56.170375    6578 out.go:352] Setting JSON to false
	I1001 16:55:56.186220    6578 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5124,"bootTime":1727821832,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:55:56.186287    6578 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:55:56.190151    6578 out.go:177] * [newest-cni-584000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:55:56.197304    6578 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:55:56.197367    6578 notify.go:220] Checking for updates...
	I1001 16:55:56.204194    6578 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:55:56.207215    6578 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:55:56.215303    6578 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:55:56.218227    6578 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:55:56.221272    6578 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:55:56.224522    6578 config.go:182] Loaded profile config "newest-cni-584000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:55:56.224793    6578 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:55:56.228222    6578 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 16:55:56.235159    6578 start.go:297] selected driver: qemu2
	I1001 16:55:56.235165    6578 start.go:901] validating driver "qemu2" against &{Name:newest-cni-584000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:newest-cni-584000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Li
stenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:55:56.235233    6578 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:55:56.237403    6578 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1001 16:55:56.237426    6578 cni.go:84] Creating CNI manager for ""
	I1001 16:55:56.237448    6578 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 16:55:56.237482    6578 start.go:340] cluster config:
	{Name:newest-cni-584000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-584000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:55:56.241082    6578 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 16:55:56.248126    6578 out.go:177] * Starting "newest-cni-584000" primary control-plane node in "newest-cni-584000" cluster
	I1001 16:55:56.252219    6578 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 16:55:56.252234    6578 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 16:55:56.252242    6578 cache.go:56] Caching tarball of preloaded images
	I1001 16:55:56.252305    6578 preload.go:172] Found /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 16:55:56.252311    6578 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1001 16:55:56.252380    6578 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/newest-cni-584000/config.json ...
	I1001 16:55:56.252939    6578 start.go:360] acquireMachinesLock for newest-cni-584000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:55:56.252971    6578 start.go:364] duration metric: took 25.333µs to acquireMachinesLock for "newest-cni-584000"
	I1001 16:55:56.252979    6578 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:55:56.252985    6578 fix.go:54] fixHost starting: 
	I1001 16:55:56.253120    6578 fix.go:112] recreateIfNeeded on newest-cni-584000: state=Stopped err=<nil>
	W1001 16:55:56.253128    6578 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:55:56.257146    6578 out.go:177] * Restarting existing qemu2 VM for "newest-cni-584000" ...
	I1001 16:55:56.265248    6578 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:55:56.265281    6578 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:e1:8b:55:0a:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/disk.qcow2
	I1001 16:55:56.267180    6578 main.go:141] libmachine: STDOUT: 
	I1001 16:55:56.267197    6578 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:55:56.267227    6578 fix.go:56] duration metric: took 14.241917ms for fixHost
	I1001 16:55:56.267231    6578 start.go:83] releasing machines lock for "newest-cni-584000", held for 14.256625ms
	W1001 16:55:56.267238    6578 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:55:56.267281    6578 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:55:56.267286    6578 start.go:729] Will try again in 5 seconds ...
	I1001 16:56:01.269422    6578 start.go:360] acquireMachinesLock for newest-cni-584000: {Name:mk4df8cf81b0f17518d4f6967beb92a5f587e7b6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1001 16:56:01.269902    6578 start.go:364] duration metric: took 343.125µs to acquireMachinesLock for "newest-cni-584000"
	I1001 16:56:01.270058    6578 start.go:96] Skipping create...Using existing machine configuration
	I1001 16:56:01.270078    6578 fix.go:54] fixHost starting: 
	I1001 16:56:01.270861    6578 fix.go:112] recreateIfNeeded on newest-cni-584000: state=Stopped err=<nil>
	W1001 16:56:01.270886    6578 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 16:56:01.279544    6578 out.go:177] * Restarting existing qemu2 VM for "newest-cni-584000" ...
	I1001 16:56:01.282620    6578 qemu.go:418] Using hvf for hardware acceleration
	I1001 16:56:01.282915    6578 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:e1:8b:55:0a:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19740-1141/.minikube/machines/newest-cni-584000/disk.qcow2
	I1001 16:56:01.292825    6578 main.go:141] libmachine: STDOUT: 
	I1001 16:56:01.292876    6578 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1001 16:56:01.292959    6578 fix.go:56] duration metric: took 22.881542ms for fixHost
	I1001 16:56:01.292974    6578 start.go:83] releasing machines lock for "newest-cni-584000", held for 23.05075ms
	W1001 16:56:01.293181    6578 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-584000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-584000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1001 16:56:01.300557    6578 out.go:201] 
	W1001 16:56:01.304414    6578 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1001 16:56:01.304458    6578 out.go:270] * 
	* 
	W1001 16:56:01.307030    6578 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 16:56:01.314513    6578 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-584000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000: exit status 7 (67.916791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-584000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
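SecondStart hits the same refusal on the restart path ("Restarting existing qemu2 VM ..."). The error text itself proposes the cleanup; a short recovery sketch combining that suggestion with the socket check from the FirstStart note above (profile name taken from the log):

    # Suggested by the error message: drop the broken profile
    out/minikube-darwin-arm64 delete -p newest-cni-584000

    # Verify the vmnet socket is served again before re-running the start command
    ls -l /var/run/socket_vmnet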

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-584000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000: exit status 7 (30.014ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-584000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-584000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-584000 --alsologtostderr -v=1: exit status 83 (41.546458ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-584000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-584000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 16:56:01.499485    6592 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:56:01.499656    6592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:56:01.499659    6592 out.go:358] Setting ErrFile to fd 2...
	I1001 16:56:01.499661    6592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:56:01.499801    6592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:56:01.500014    6592 out.go:352] Setting JSON to false
	I1001 16:56:01.500022    6592 mustload.go:65] Loading cluster: newest-cni-584000
	I1001 16:56:01.500241    6592 config.go:182] Loaded profile config "newest-cni-584000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:56:01.504326    6592 out.go:177] * The control-plane node newest-cni-584000 host is not running: state=Stopped
	I1001 16:56:01.508360    6592 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-584000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-584000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000: exit status 7 (30.283958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-584000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000: exit status 7 (30.133334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-584000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (153/273)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.1/json-events 18.52
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.1
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 255.88
29 TestAddons/serial/Volcano 37.9
31 TestAddons/serial/GCPAuth/Namespaces 0.08
33 TestAddons/parallel/Registry 20.5
34 TestAddons/parallel/Ingress 18.04
35 TestAddons/parallel/InspektorGadget 10.36
36 TestAddons/parallel/MetricsServer 6.3
38 TestAddons/parallel/CSI 29.3
39 TestAddons/parallel/Headlamp 16.64
40 TestAddons/parallel/CloudSpanner 5.19
41 TestAddons/parallel/LocalPath 51.99
42 TestAddons/parallel/NvidiaDevicePlugin 5.2
43 TestAddons/parallel/Yakd 11.39
44 TestAddons/StoppedEnableDisable 12.41
52 TestHyperKitDriverInstallOrUpdate 12.27
55 TestErrorSpam/setup 35.05
56 TestErrorSpam/start 0.35
57 TestErrorSpam/status 0.23
58 TestErrorSpam/pause 0.67
59 TestErrorSpam/unpause 0.64
60 TestErrorSpam/stop 64.33
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 73.28
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 38.28
67 TestFunctional/serial/KubeContext 0.03
68 TestFunctional/serial/KubectlGetPods 0.04
71 TestFunctional/serial/CacheCmd/cache/add_remote 9.23
72 TestFunctional/serial/CacheCmd/cache/add_local 1.2
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
74 TestFunctional/serial/CacheCmd/cache/list 0.03
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
76 TestFunctional/serial/CacheCmd/cache/cache_reload 2.93
77 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/serial/MinikubeKubectlCmd 2.24
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.03
80 TestFunctional/serial/ExtraConfig 38.07
81 TestFunctional/serial/ComponentHealth 0.04
82 TestFunctional/serial/LogsCmd 0.64
83 TestFunctional/serial/LogsFileCmd 0.6
84 TestFunctional/serial/InvalidService 4.42
86 TestFunctional/parallel/ConfigCmd 0.23
87 TestFunctional/parallel/DashboardCmd 11.77
88 TestFunctional/parallel/DryRun 0.23
89 TestFunctional/parallel/InternationalLanguage 0.11
90 TestFunctional/parallel/StatusCmd 0.24
95 TestFunctional/parallel/AddonsCmd 0.1
96 TestFunctional/parallel/PersistentVolumeClaim 25.66
98 TestFunctional/parallel/SSHCmd 0.13
99 TestFunctional/parallel/CpCmd 0.41
101 TestFunctional/parallel/FileSync 0.07
102 TestFunctional/parallel/CertSync 0.4
106 TestFunctional/parallel/NodeLabels 0.04
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.06
110 TestFunctional/parallel/License 1.39
111 TestFunctional/parallel/Version/short 0.04
112 TestFunctional/parallel/Version/components 0.18
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.1
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.09
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.1
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
117 TestFunctional/parallel/ImageCommands/ImageBuild 4.95
118 TestFunctional/parallel/ImageCommands/Setup 1.78
119 TestFunctional/parallel/DockerEnv/bash 0.29
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.45
124 TestFunctional/parallel/ServiceCmd/DeployApp 13.09
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.35
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.13
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.14
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.28
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.17
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.76
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.11
136 TestFunctional/parallel/ServiceCmd/List 0.12
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.08
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.09
139 TestFunctional/parallel/ServiceCmd/Format 0.1
140 TestFunctional/parallel/ServiceCmd/URL 0.1
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
145 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
148 TestFunctional/parallel/ProfileCmd/profile_list 0.13
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.13
150 TestFunctional/parallel/MountCmd/any-port 10.09
151 TestFunctional/parallel/MountCmd/specific-port 1.81
152 TestFunctional/parallel/MountCmd/VerifyCleanup 0.92
153 TestFunctional/delete_echo-server_images 0.05
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 238.81
160 TestMultiControlPlane/serial/DeployApp 9.47
161 TestMultiControlPlane/serial/PingHostFromPods 0.75
162 TestMultiControlPlane/serial/AddWorkerNode 86.17
163 TestMultiControlPlane/serial/NodeLabels 0.13
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.31
165 TestMultiControlPlane/serial/CopyFile 4.2
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 3.2
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
211 TestMainNoArgs 0.03
258 TestStoppedBinaryUpgrade/Setup 4.67
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
275 TestNoKubernetes/serial/ProfileList 31.3
276 TestNoKubernetes/serial/Stop 3.52
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
280 TestStoppedBinaryUpgrade/MinikubeLogs 0.7
295 TestStartStop/group/old-k8s-version/serial/Stop 3.29
298 TestStartStop/group/no-preload/serial/Stop 3.66
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
301 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
317 TestStartStop/group/embed-certs/serial/Stop 3.6
320 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.51
321 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
337 TestStartStop/group/newest-cni/serial/Stop 2.87
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1001 15:47:27.003527    1659 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1001 15:47:27.004009    1659 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-065000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-065000: exit status 85 (97.509792ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-065000 | jenkins | v1.34.0 | 01 Oct 24 15:46 PDT |          |
	|         | -p download-only-065000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 15:46:47
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 15:46:47.212069    1660 out.go:345] Setting OutFile to fd 1 ...
	I1001 15:46:47.212213    1660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 15:46:47.212216    1660 out.go:358] Setting ErrFile to fd 2...
	I1001 15:46:47.212219    1660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 15:46:47.212350    1660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	W1001 15:46:47.212430    1660 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19740-1141/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19740-1141/.minikube/config/config.json: no such file or directory
	I1001 15:46:47.213735    1660 out.go:352] Setting JSON to true
	I1001 15:46:47.231014    1660 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":975,"bootTime":1727821832,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 15:46:47.231081    1660 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 15:46:47.235835    1660 out.go:97] [download-only-065000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 15:46:47.235983    1660 notify.go:220] Checking for updates...
	W1001 15:46:47.236070    1660 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball: no such file or directory
	I1001 15:46:47.239537    1660 out.go:169] MINIKUBE_LOCATION=19740
	I1001 15:46:47.242642    1660 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 15:46:47.247517    1660 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 15:46:47.250672    1660 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 15:46:47.254665    1660 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	W1001 15:46:47.258677    1660 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 15:46:47.258916    1660 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 15:46:47.263695    1660 out.go:97] Using the qemu2 driver based on user configuration
	I1001 15:46:47.263715    1660 start.go:297] selected driver: qemu2
	I1001 15:46:47.263731    1660 start.go:901] validating driver "qemu2" against <nil>
	I1001 15:46:47.263825    1660 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 15:46:47.267693    1660 out.go:169] Automatically selected the socket_vmnet network
	I1001 15:46:47.273115    1660 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1001 15:46:47.273219    1660 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 15:46:47.273264    1660 cni.go:84] Creating CNI manager for ""
	I1001 15:46:47.273298    1660 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1001 15:46:47.273343    1660 start.go:340] cluster config:
	{Name:download-only-065000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-065000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 15:46:47.277012    1660 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 15:46:47.280719    1660 out.go:97] Downloading VM boot image ...
	I1001 15:46:47.280740    1660 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I1001 15:47:05.035724    1660 out.go:97] Starting "download-only-065000" primary control-plane node in "download-only-065000" cluster
	I1001 15:47:05.035745    1660 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1001 15:47:05.315497    1660 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1001 15:47:05.315584    1660 cache.go:56] Caching tarball of preloaded images
	I1001 15:47:05.316447    1660 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1001 15:47:05.321918    1660 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1001 15:47:05.321944    1660 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1001 15:47:05.889505    1660 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1001 15:47:25.400974    1660 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1001 15:47:25.401157    1660 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1001 15:47:26.096493    1660 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1001 15:47:26.096699    1660 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/download-only-065000/config.json ...
	I1001 15:47:26.096720    1660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/download-only-065000/config.json: {Name:mke95ab2104e60b276ab470a74508d6d1fa617da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 15:47:26.096973    1660 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1001 15:47:26.097168    1660 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1001 15:47:26.957027    1660 out.go:193] 
	W1001 15:47:26.962015    1660 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19740-1141/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1088256c0 0x1088256c0 0x1088256c0 0x1088256c0 0x1088256c0 0x1088256c0 0x1088256c0] Decompressors:map[bz2:0x140004878a0 gz:0x140004878a8 tar:0x14000487850 tar.bz2:0x14000487860 tar.gz:0x14000487870 tar.xz:0x14000487880 tar.zst:0x14000487890 tbz2:0x14000487860 tgz:0x14000487870 txz:0x14000487880 tzst:0x14000487890 xz:0x140004878d0 zip:0x140004878e0 zst:0x140004878d8] Getters:map[file:0x1400093a780 http:0x14000980910 https:0x14000980960] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1001 15:47:26.962046    1660 out_reason.go:110] 
	W1001 15:47:26.969923    1660 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 15:47:26.972907    1660 out.go:193] 
	
	
	* The control-plane node download-only-065000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-065000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-065000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.1/json-events (18.52s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-063000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-063000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (18.515691625s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (18.52s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1001 15:47:45.881721    1659 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1001 15:47:45.881777    1659 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-063000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-063000: exit status 85 (81.640125ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-065000 | jenkins | v1.34.0 | 01 Oct 24 15:46 PDT |                     |
	|         | -p download-only-065000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 01 Oct 24 15:47 PDT | 01 Oct 24 15:47 PDT |
	| delete  | -p download-only-065000        | download-only-065000 | jenkins | v1.34.0 | 01 Oct 24 15:47 PDT | 01 Oct 24 15:47 PDT |
	| start   | -o=json --download-only        | download-only-063000 | jenkins | v1.34.0 | 01 Oct 24 15:47 PDT |                     |
	|         | -p download-only-063000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 15:47:27
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 15:47:27.393312    1689 out.go:345] Setting OutFile to fd 1 ...
	I1001 15:47:27.393444    1689 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 15:47:27.393448    1689 out.go:358] Setting ErrFile to fd 2...
	I1001 15:47:27.393450    1689 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 15:47:27.393591    1689 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 15:47:27.394609    1689 out.go:352] Setting JSON to true
	I1001 15:47:27.410612    1689 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1015,"bootTime":1727821832,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 15:47:27.410685    1689 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 15:47:27.414317    1689 out.go:97] [download-only-063000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 15:47:27.414420    1689 notify.go:220] Checking for updates...
	I1001 15:47:27.419007    1689 out.go:169] MINIKUBE_LOCATION=19740
	I1001 15:47:27.422171    1689 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 15:47:27.425190    1689 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 15:47:27.428070    1689 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 15:47:27.432155    1689 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	W1001 15:47:27.438178    1689 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 15:47:27.438392    1689 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 15:47:27.441149    1689 out.go:97] Using the qemu2 driver based on user configuration
	I1001 15:47:27.441159    1689 start.go:297] selected driver: qemu2
	I1001 15:47:27.441163    1689 start.go:901] validating driver "qemu2" against <nil>
	I1001 15:47:27.441224    1689 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 15:47:27.444189    1689 out.go:169] Automatically selected the socket_vmnet network
	I1001 15:47:27.449084    1689 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1001 15:47:27.449178    1689 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 15:47:27.449196    1689 cni.go:84] Creating CNI manager for ""
	I1001 15:47:27.449219    1689 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1001 15:47:27.449227    1689 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1001 15:47:27.449267    1689 start.go:340] cluster config:
	{Name:download-only-063000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-063000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAu
thSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 15:47:27.452626    1689 iso.go:125] acquiring lock: {Name:mk2e6128dd9f7d36ae0a81872c582d2694baaef0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 15:47:27.456170    1689 out.go:97] Starting "download-only-063000" primary control-plane node in "download-only-063000" cluster
	I1001 15:47:27.456177    1689 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 15:47:28.576659    1689 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1001 15:47:28.576735    1689 cache.go:56] Caching tarball of preloaded images
	I1001 15:47:28.577518    1689 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1001 15:47:28.583016    1689 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1001 15:47:28.583057    1689 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I1001 15:47:29.145774    1689 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19740-1141/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-063000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-063000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-063000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.10s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:932: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-356000
addons_test.go:932: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-356000: exit status 85 (56.041ms)

-- stdout --
	* Profile "addons-356000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-356000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:943: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-356000
addons_test.go:943: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-356000: exit status 85 (59.8345ms)

-- stdout --
	* Profile "addons-356000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-356000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (255.88s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-356000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-darwin-arm64 start -p addons-356000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (4m15.88324125s)
--- PASS: TestAddons/Setup (255.88s)

TestAddons/serial/Volcano (37.9s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:817: volcano-controller stabilized in 7.071416ms
addons_test.go:801: volcano-scheduler stabilized in 7.207083ms
addons_test.go:809: volcano-admission stabilized in 7.271625ms
addons_test.go:823: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-k8m6f" [5ecf3fe0-ea8f-482a-a45b-25f9b9d859cf] Running
addons_test.go:823: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.007756333s
addons_test.go:827: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-xv9hb" [690c5d29-e30b-486a-b164-b0ff2aa9a747] Running
addons_test.go:827: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00367875s
addons_test.go:831: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-67l27" [240a617e-31a2-4968-853d-e1c537d4a56f] Running
addons_test.go:831: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.005934583s
addons_test.go:836: (dbg) Run:  kubectl --context addons-356000 delete -n volcano-system job volcano-admission-init
addons_test.go:842: (dbg) Run:  kubectl --context addons-356000 create -f testdata/vcjob.yaml
addons_test.go:850: (dbg) Run:  kubectl --context addons-356000 get vcjob -n my-volcano
addons_test.go:868: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [d4a1446d-05db-4abf-9901-a7843e2d70fd] Pending
helpers_test.go:344: "test-job-nginx-0" [d4a1446d-05db-4abf-9901-a7843e2d70fd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [d4a1446d-05db-4abf-9901-a7843e2d70fd] Running
addons_test.go:868: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.007054375s
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-356000 addons disable volcano --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-darwin-arm64 -p addons-356000 addons disable volcano --alsologtostderr -v=1: (10.636121917s)
--- PASS: TestAddons/serial/Volcano (37.90s)

TestAddons/serial/GCPAuth/Namespaces (0.08s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-356000 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-356000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.08s)

TestAddons/parallel/Registry (20.5s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 1.347083ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-pl6s5" [94674da1-0833-4096-b05c-c12aa26bf5ea] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003550625s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-58lhr" [abec4070-80a4-44ba-af2e-74473997360f] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004337833s
addons_test.go:331: (dbg) Run:  kubectl --context addons-356000 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-356000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-356000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.220745083s)
addons_test.go:350: (dbg) Run:  out/minikube-darwin-arm64 -p addons-356000 ip
2024/10/01 16:01:10 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-356000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.50s)

TestAddons/parallel/Ingress (18.04s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-356000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-356000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-356000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [26492d3e-e4e5-403e-83b6-4c4b8257c305] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [26492d3e-e4e5-403e-83b6-4c4b8257c305] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.007304875s
I1001 16:02:17.083757    1659 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-darwin-arm64 -p addons-356000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-356000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-arm64 -p addons-356000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-356000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-darwin-arm64 -p addons-356000 addons disable ingress-dns --alsologtostderr -v=1: (1.146589542s)
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-356000 addons disable ingress --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-darwin-arm64 -p addons-356000 addons disable ingress --alsologtostderr -v=1: (7.250466458s)
--- PASS: TestAddons/parallel/Ingress (18.04s)

TestAddons/parallel/InspektorGadget (10.36s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:756: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rs8q4" [47a31f20-6718-4ffc-bb80-eaff8e308719] Running
addons_test.go:756: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012172708s
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-356000 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-darwin-arm64 -p addons-356000 addons disable inspektor-gadget --alsologtostderr -v=1: (5.344876s)
--- PASS: TestAddons/parallel/InspektorGadget (10.36s)

TestAddons/parallel/MetricsServer (6.3s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 1.269625ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-q8snp" [31aba60f-62d5-44c4-b2a0-1adabec6a8e8] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.011412792s
addons_test.go:402: (dbg) Run:  kubectl --context addons-356000 top pods -n kube-system
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-356000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.30s)

TestAddons/parallel/CSI (29.3s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1001 16:01:29.580869    1659 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1001 16:01:29.583372    1659 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1001 16:01:29.583381    1659 kapi.go:107] duration metric: took 2.54875ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 2.552291ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-356000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-356000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-356000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-356000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-356000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-356000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [04ec98db-bd45-4428-884e-dd930bd90a95] Pending
helpers_test.go:344: "task-pv-pod" [04ec98db-bd45-4428-884e-dd930bd90a95] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [04ec98db-bd45-4428-884e-dd930bd90a95] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.005347459s
addons_test.go:511: (dbg) Run:  kubectl --context addons-356000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-356000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-356000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-356000 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-356000 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-356000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-356000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-356000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-356000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-356000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3d726fb8-f643-46cf-9d69-33ce0248596b] Pending
helpers_test.go:344: "task-pv-pod-restore" [3d726fb8-f643-46cf-9d69-33ce0248596b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3d726fb8-f643-46cf-9d69-33ce0248596b] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005737541s
addons_test.go:553: (dbg) Run:  kubectl --context addons-356000 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-356000 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-356000 delete volumesnapshot new-snapshot-demo
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-356000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-356000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-darwin-arm64 -p addons-356000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.099751291s)
--- PASS: TestAddons/parallel/CSI (29.30s)

TestAddons/parallel/Headlamp (16.64s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:741: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-356000 --alsologtostderr -v=1
addons_test.go:746: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-mcspf" [5ce15514-9dc6-4909-9206-5faf0294b545] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-mcspf" [5ce15514-9dc6-4909-9206-5faf0294b545] Running
addons_test.go:746: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.009425167s
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-356000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-darwin-arm64 -p addons-356000 addons disable headlamp --alsologtostderr -v=1: (5.273134417s)
--- PASS: TestAddons/parallel/Headlamp (16.64s)

TestAddons/parallel/CloudSpanner (5.19s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:773: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-x6szc" [e829e85f-1086-4919-a4c3-7147702c8200] Running
addons_test.go:773: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003901875s
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-356000 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.19s)

TestAddons/parallel/LocalPath (51.99s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:881: (dbg) Run:  kubectl --context addons-356000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:887: (dbg) Run:  kubectl --context addons-356000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:891: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-356000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-356000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-356000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-356000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-356000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [6b790ae4-154e-4a2d-bd19-098536d384bd] Pending
helpers_test.go:344: "test-local-path" [6b790ae4-154e-4a2d-bd19-098536d384bd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [6b790ae4-154e-4a2d-bd19-098536d384bd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [6b790ae4-154e-4a2d-bd19-098536d384bd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.010072s
addons_test.go:899: (dbg) Run:  kubectl --context addons-356000 get pvc test-pvc -o=json
addons_test.go:908: (dbg) Run:  out/minikube-darwin-arm64 -p addons-356000 ssh "cat /opt/local-path-provisioner/pvc-970cd860-2994-452a-83d5-317c3cbeb7e7_default_test-pvc/file1"
addons_test.go:920: (dbg) Run:  kubectl --context addons-356000 delete pod test-local-path
addons_test.go:924: (dbg) Run:  kubectl --context addons-356000 delete pvc test-pvc
addons_test.go:977: (dbg) Run:  out/minikube-darwin-arm64 -p addons-356000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-darwin-arm64 -p addons-356000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.450494s)
--- PASS: TestAddons/parallel/LocalPath (51.99s)

TestAddons/parallel/NvidiaDevicePlugin (5.2s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:956: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-cjbzd" [c1c0ea8c-b78c-485a-9b8e-9b938c1cef00] Running
addons_test.go:956: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.010472708s
addons_test.go:959: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-356000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.20s)

TestAddons/parallel/Yakd (11.39s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:967: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-l9fvt" [041ba69b-22ac-422d-9306-333b4a3e4707] Running
addons_test.go:967: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.01110175s
addons_test.go:971: (dbg) Run:  out/minikube-darwin-arm64 -p addons-356000 addons disable yakd --alsologtostderr -v=1
addons_test.go:971: (dbg) Done: out/minikube-darwin-arm64 -p addons-356000 addons disable yakd --alsologtostderr -v=1: (5.379076375s)
--- PASS: TestAddons/parallel/Yakd (11.39s)

TestAddons/StoppedEnableDisable (12.41s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-356000
addons_test.go:170: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-356000: (12.217160458s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-356000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-356000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-356000
--- PASS: TestAddons/StoppedEnableDisable (12.41s)

TestHyperKitDriverInstallOrUpdate (12.27s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1001 16:40:08.960952    1659 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1001 16:40:08.961154    1659 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W1001 16:40:11.609244    1659 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1001 16:40:11.609467    1659 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1001 16:40:11.609521    1659 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1732578170/001/docker-machine-driver-hyperkit
I1001 16:40:12.128459    1659 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1732578170/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1048f2d40 0x1048f2d40 0x1048f2d40 0x1048f2d40 0x1048f2d40 0x1048f2d40 0x1048f2d40] Decompressors:map[bz2:0x14000482db0 gz:0x14000482db8 tar:0x14000482d60 tar.bz2:0x14000482d70 tar.gz:0x14000482d80 tar.xz:0x14000482d90 tar.zst:0x14000482da0 tbz2:0x14000482d70 tgz:0x14000482d80 txz:0x14000482d90 tzst:0x14000482da0 xz:0x14000482dc0 zip:0x14000482dd0 zst:0x14000482dc8] Getters:map[file:0x140013dd4b0 http:0x140008810e0 https:0x14000881130] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1001 16:40:12.128593    1659 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1732578170/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (12.27s)

TestErrorSpam/setup (35.05s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-392000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-392000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-392000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-392000 --driver=qemu2 : (35.049794458s)
--- PASS: TestErrorSpam/setup (35.05s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-392000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-392000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-392000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-392000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-392000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-392000 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.23s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-392000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-392000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-392000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-392000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-392000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-392000 status
--- PASS: TestErrorSpam/status (0.23s)

TestErrorSpam/pause (0.67s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-392000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-392000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-392000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-392000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-392000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-392000 pause
--- PASS: TestErrorSpam/pause (0.67s)

TestErrorSpam/unpause (0.64s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-392000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-392000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-392000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-392000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-392000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-392000 unpause
--- PASS: TestErrorSpam/unpause (0.64s)

TestErrorSpam/stop (64.33s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-392000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-392000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-392000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-392000 stop: (12.204253875s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-392000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-392000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-392000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-392000 stop: (26.057043541s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-392000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-392000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-392000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-392000 stop: (26.064016083s)
--- PASS: TestErrorSpam/stop (64.33s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19740-1141/.minikube/files/etc/test/nested/copy/1659/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (73.28s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-808000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-808000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (1m13.279349958s)
--- PASS: TestFunctional/serial/StartWithProxy (73.28s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.28s)

=== RUN   TestFunctional/serial/SoftStart
I1001 16:05:33.170139    1659 config.go:182] Loaded profile config "functional-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-808000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-808000 --alsologtostderr -v=8: (38.283711459s)
functional_test.go:663: soft start took 38.28415025s for "functional-808000" cluster.
I1001 16:06:11.453663    1659 config.go:182] Loaded profile config "functional-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (38.28s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-808000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (9.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-808000 cache add registry.k8s.io/pause:3.1: (3.56442825s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-808000 cache add registry.k8s.io/pause:3.3: (3.406918709s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-808000 cache add registry.k8s.io/pause:latest: (2.256989958s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.23s)

TestFunctional/serial/CacheCmd/cache/add_local (1.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-808000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local4014830935/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 cache add minikube-local-cache-test:functional-808000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 cache delete minikube-local-cache-test:functional-808000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-808000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.20s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-808000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (67.757917ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-darwin-arm64 -p functional-808000 cache reload: (2.692632209s)
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.93s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (2.24s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 kubectl -- --context functional-808000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-arm64 -p functional-808000 kubectl -- --context functional-808000 get pods: (2.240043042s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (2.24s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.03s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-808000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-808000 get pods: (1.026396417s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.03s)

TestFunctional/serial/ExtraConfig (38.07s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-808000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1001 16:07:02.596728    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:07:02.603660    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:07:02.617053    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:07:02.640477    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:07:02.683948    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:07:02.767437    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:07:02.930898    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:07:03.254470    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:07:03.897984    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:07:05.181780    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-808000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.065174584s)
functional_test.go:761: restart took 38.065268167s for "functional-808000" cluster.
I1001 16:07:06.428689    1659 config.go:182] Loaded profile config "functional-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (38.07s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-808000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.64s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.64s)

TestFunctional/serial/LogsFileCmd (0.6s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd96524313/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.60s)

TestFunctional/serial/InvalidService (4.42s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-808000 apply -f testdata/invalidsvc.yaml
E1001 16:07:07.744840    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-808000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-808000: exit status 115 (140.913667ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30902 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-808000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-808000 delete -f testdata/invalidsvc.yaml: (1.182134125s)
--- PASS: TestFunctional/serial/InvalidService (4.42s)

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-808000 config get cpus: exit status 14 (33.764208ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-808000 config get cpus: exit status 14 (30.176583ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)

TestFunctional/parallel/DashboardCmd (11.77s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-808000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-808000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2920: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.77s)

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-808000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-808000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (118.106417ms)

-- stdout --
	* [functional-808000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1001 16:08:01.523488    2903 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:08:01.523641    2903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:08:01.523645    2903 out.go:358] Setting ErrFile to fd 2...
	I1001 16:08:01.523647    2903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:08:01.523784    2903 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:08:01.524987    2903 out.go:352] Setting JSON to false
	I1001 16:08:01.544549    2903 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2249,"bootTime":1727821832,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:08:01.544626    2903 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:08:01.548961    2903 out.go:177] * [functional-808000] minikube v1.34.0 on Darwin 14.5 (arm64)
	I1001 16:08:01.556023    2903 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:08:01.556061    2903 notify.go:220] Checking for updates...
	I1001 16:08:01.563899    2903 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:08:01.566930    2903 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:08:01.570036    2903 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:08:01.573026    2903 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:08:01.574442    2903 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:08:01.578299    2903 config.go:182] Loaded profile config "functional-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:08:01.578558    2903 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:08:01.582947    2903 out.go:177] * Using the qemu2 driver based on existing profile
	I1001 16:08:01.587901    2903 start.go:297] selected driver: qemu2
	I1001 16:08:01.587907    2903 start.go:901] validating driver "qemu2" against &{Name:functional-808000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:functional-808000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpira
tion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:08:01.587972    2903 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:08:01.595000    2903 out.go:201] 
	W1001 16:08:01.599034    2903 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1001 16:08:01.602882    2903 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-808000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-808000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-808000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.776708ms)

-- stdout --
	* [functional-808000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1001 16:08:01.748712    2914 out.go:345] Setting OutFile to fd 1 ...
	I1001 16:08:01.748826    2914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:08:01.748829    2914 out.go:358] Setting ErrFile to fd 2...
	I1001 16:08:01.748832    2914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 16:08:01.748970    2914 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
	I1001 16:08:01.750410    2914 out.go:352] Setting JSON to false
	I1001 16:08:01.768128    2914 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2249,"bootTime":1727821832,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1001 16:08:01.768207    2914 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1001 16:08:01.773033    2914 out.go:177] * [functional-808000] minikube v1.34.0 sur Darwin 14.5 (arm64)
	I1001 16:08:01.779912    2914 out.go:177]   - MINIKUBE_LOCATION=19740
	I1001 16:08:01.780011    2914 notify.go:220] Checking for updates...
	I1001 16:08:01.787854    2914 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	I1001 16:08:01.790958    2914 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1001 16:08:01.794023    2914 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 16:08:01.796913    2914 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	I1001 16:08:01.803728    2914 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 16:08:01.807229    2914 config.go:182] Loaded profile config "functional-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1001 16:08:01.807497    2914 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 16:08:01.811934    2914 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1001 16:08:01.816872    2914 start.go:297] selected driver: qemu2
	I1001 16:08:01.816877    2914 start.go:901] validating driver "qemu2" against &{Name:functional-808000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:functional-808000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpira
tion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 16:08:01.816922    2914 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 16:08:01.822928    2914 out.go:201] 
	W1001 16:08:01.826896    2914 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1001 16:08:01.830885    2914 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (25.66s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [37e0aa19-c2af-4e8d-8eda-b5ff604af931] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00814575s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-808000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-808000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-808000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-808000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [115feb62-41e6-4dd2-ba94-c60bec08c1df] Pending
helpers_test.go:344: "sp-pod" [115feb62-41e6-4dd2-ba94-c60bec08c1df] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [115feb62-41e6-4dd2-ba94-c60bec08c1df] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.011383875s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-808000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-808000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-808000 delete -f testdata/storage-provisioner/pod.yaml: (1.130186667s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-808000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [82053ec0-21ea-4f96-8e21-609876e36853] Pending
helpers_test.go:344: "sp-pod" [82053ec0-21ea-4f96-8e21-609876e36853] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [82053ec0-21ea-4f96-8e21-609876e36853] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.013296083s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-808000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.66s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.41s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh -n functional-808000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 cp functional-808000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1087617449/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh -n functional-808000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh -n functional-808000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.41s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1659/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh "sudo cat /etc/test/nested/copy/1659/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.4s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1659.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh "sudo cat /etc/ssl/certs/1659.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1659.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh "sudo cat /usr/share/ca-certificates/1659.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/16592.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh "sudo cat /etc/ssl/certs/16592.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/16592.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh "sudo cat /usr/share/ca-certificates/16592.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.40s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-808000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-808000 ssh "sudo systemctl is-active crio": exit status 1 (58.493958ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)

TestFunctional/parallel/License (1.39s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2288: (dbg) Done: out/minikube-darwin-arm64 license: (1.38534125s)
--- PASS: TestFunctional/parallel/License (1.39s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.18s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.18s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-808000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-808000
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-808000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-808000 image ls --format short --alsologtostderr:
I1001 16:08:07.649493    2967 out.go:345] Setting OutFile to fd 1 ...
I1001 16:08:07.649868    2967 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 16:08:07.649871    2967 out.go:358] Setting ErrFile to fd 2...
I1001 16:08:07.649873    2967 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 16:08:07.650000    2967 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
I1001 16:08:07.650490    2967 config.go:182] Loaded profile config "functional-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 16:08:07.650558    2967 config.go:182] Loaded profile config "functional-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 16:08:07.651470    2967 ssh_runner.go:195] Run: systemctl --version
I1001 16:08:07.651486    2967 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/functional-808000/id_rsa Username:docker}
I1001 16:08:07.676437    2967 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.10s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-808000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kicbase/echo-server               | functional-808000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-808000 | ddef15c160bb9 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| docker.io/library/nginx                     | latest            | 6e8672ddd037e | 193MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-808000 image ls --format table --alsologtostderr:
I1001 16:08:07.931132    2973 out.go:345] Setting OutFile to fd 1 ...
I1001 16:08:07.931312    2973 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 16:08:07.931317    2973 out.go:358] Setting ErrFile to fd 2...
I1001 16:08:07.931320    2973 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 16:08:07.931452    2973 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
I1001 16:08:07.931886    2973 config.go:182] Loaded profile config "functional-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 16:08:07.931954    2973 config.go:182] Loaded profile config "functional-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 16:08:07.932834    2973 ssh_runner.go:195] Run: systemctl --version
I1001 16:08:07.932843    2973 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/functional-808000/id_rsa Username:docker}
I1001 16:08:07.966439    2973 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.09s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-808000 image ls --format json --alsologtostderr:
[{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","r
epoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"ddef15c160bb9224e41385bfea7e3997d6c247d224b33560b7103b358488c5f5","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-808000"],"size":"30"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"6e8672ddd037e6078cad0c819d331972e2a0c8e2aee506fcb94258c2536e4cf2","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-mini
kube/storage-provisioner:v5"],"size":"29000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-808000"],"size":"4780000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-808000 image ls --format json --alsologtostderr:
I1001 16:08:07.748611    2969 out.go:345] Setting OutFile to fd 1 ...
I1001 16:08:07.748759    2969 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 16:08:07.748763    2969 out.go:358] Setting ErrFile to fd 2...
I1001 16:08:07.748766    2969 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 16:08:07.748905    2969 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
I1001 16:08:07.749312    2969 config.go:182] Loaded profile config "functional-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 16:08:07.749371    2969 config.go:182] Loaded profile config "functional-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 16:08:07.750183    2969 ssh_runner.go:195] Run: systemctl --version
I1001 16:08:07.750191    2969 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/functional-808000/id_rsa Username:docker}
I1001 16:08:07.780329    2969 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.10s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-808000 image ls --format yaml --alsologtostderr:
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: ddef15c160bb9224e41385bfea7e3997d6c247d224b33560b7103b358488c5f5
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-808000
size: "30"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 6e8672ddd037e6078cad0c819d331972e2a0c8e2aee506fcb94258c2536e4cf2
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-808000
size: "4780000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-808000 image ls --format yaml --alsologtostderr:
I1001 16:08:07.845384    2971 out.go:345] Setting OutFile to fd 1 ...
I1001 16:08:07.845536    2971 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 16:08:07.845541    2971 out.go:358] Setting ErrFile to fd 2...
I1001 16:08:07.845543    2971 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 16:08:07.845686    2971 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
I1001 16:08:07.846119    2971 config.go:182] Loaded profile config "functional-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 16:08:07.846188    2971 config.go:182] Loaded profile config "functional-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 16:08:07.847050    2971 ssh_runner.go:195] Run: systemctl --version
I1001 16:08:07.847060    2971 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/functional-808000/id_rsa Username:docker}
I1001 16:08:07.879529    2971 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-808000 ssh pgrep buildkitd: exit status 1 (67.282625ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 image build -t localhost/my-image:functional-808000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-808000 image build -t localhost/my-image:functional-808000 testdata/build --alsologtostderr: (4.809761416s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-808000 image build -t localhost/my-image:functional-808000 testdata/build --alsologtostderr:
I1001 16:08:08.085723    2977 out.go:345] Setting OutFile to fd 1 ...
I1001 16:08:08.085944    2977 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 16:08:08.085947    2977 out.go:358] Setting ErrFile to fd 2...
I1001 16:08:08.085950    2977 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 16:08:08.086068    2977 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19740-1141/.minikube/bin
I1001 16:08:08.086512    2977 config.go:182] Loaded profile config "functional-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 16:08:08.087296    2977 config.go:182] Loaded profile config "functional-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1001 16:08:08.088252    2977 ssh_runner.go:195] Run: systemctl --version
I1001 16:08:08.088261    2977 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19740-1141/.minikube/machines/functional-808000/id_rsa Username:docker}
I1001 16:08:08.111265    2977 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.4263198987.tar
I1001 16:08:08.111315    2977 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1001 16:08:08.115245    2977 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4263198987.tar
I1001 16:08:08.117161    2977 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4263198987.tar: stat -c "%s %y" /var/lib/minikube/build/build.4263198987.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4263198987.tar': No such file or directory
I1001 16:08:08.117176    2977 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.4263198987.tar --> /var/lib/minikube/build/build.4263198987.tar (3072 bytes)
I1001 16:08:08.125553    2977 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4263198987
I1001 16:08:08.128947    2977 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4263198987 -xf /var/lib/minikube/build/build.4263198987.tar
I1001 16:08:08.132161    2977 docker.go:360] Building image: /var/lib/minikube/build/build.4263198987
I1001 16:08:08.132212    2977 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-808000 /var/lib/minikube/build/build.4263198987
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 1.6s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 1.6s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:46c762bd58ff195bdf1037ed099a4f343f234becc332e26882e8fcf94c3a5f35 done
#8 naming to localhost/my-image:functional-808000 done
#8 DONE 0.0s
I1001 16:08:12.851259    2977 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-808000 /var/lib/minikube/build/build.4263198987: (4.719088709s)
I1001 16:08:12.851333    2977 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4263198987
I1001 16:08:12.855195    2977 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4263198987.tar
I1001 16:08:12.858488    2977 build_images.go:217] Built localhost/my-image:functional-808000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.4263198987.tar
I1001 16:08:12.858503    2977 build_images.go:133] succeeded building to: functional-808000
I1001 16:08:12.858507    2977 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 image ls
2024/10/01 16:08:13 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.95s)

TestFunctional/parallel/ImageCommands/Setup (1.78s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
E1001 16:07:12.868368    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.768674709s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-808000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.78s)

TestFunctional/parallel/DockerEnv/bash (0.29s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-808000 docker-env) && out/minikube-darwin-arm64 status -p functional-808000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-808000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 image load --daemon kicbase/echo-server:functional-808000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.45s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-808000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-808000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-qznvn" [0b732c13-b31f-44a3-863c-bec93580e329] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-qznvn" [0b732c13-b31f-44a3-863c-bec93580e329] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E1001 16:07:23.111737    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.008006042s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.09s)
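Note: the two kubectl commands above (create deployment, then expose as NodePort) are roughly equivalent to applying a manifest like the sketch below. The label app=hello-node, container name echoserver-arm, image, and port 8080 all appear in this log; the NodePort itself (32045 in this run) is assigned by Kubernetes at expose time, so it is not fixed in the spec.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
        - name: echoserver-arm
          image: registry.k8s.io/echoserver-arm:1.8
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  type: NodePort            # the node port is allocated dynamically, 32045 in this run
  selector:
    app: hello-node
  ports:
    - port: 8080
      targetPort: 8080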

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 image load --daemon kicbase/echo-server:functional-808000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.35s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-808000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 image load --daemon kicbase/echo-server:functional-808000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 image save kicbase/echo-server:functional-808000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 image rm kicbase/echo-server:functional-808000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.14s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.28s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-808000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 image save --daemon kicbase/echo-server:functional-808000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-808000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.76s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-808000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-808000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-808000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2770: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-808000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.76s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-808000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-808000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [30fa4dac-c684-497b-abae-402c6f5d38d0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [30fa4dac-c684-497b-abae-402c6f5d38d0] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.009837291s
I1001 16:07:31.096323    1659 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.11s)
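Note: testdata/testsvc.yaml is not reproduced in this log. A rough sketch consistent with what the tunnel subtests check later (pod and service both named nginx-svc, selector run=nginx-svc, a LoadBalancer service whose ingress IP is supplied by "minikube tunnel") is shown below; the image tag and port numbers are assumptions.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-svc
  labels:
    run: nginx-svc
spec:
  containers:
    - name: nginx
      image: nginx          # tag not visible in the log
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: LoadBalancer        # minikube tunnel supplies the ingress IP checked by WaitService/IngressIP
  selector:
    run: nginx-svc
  ports:
    - port: 80
      targetPort: 80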

TestFunctional/parallel/ServiceCmd/List (0.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.12s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 service list -o json
functional_test.go:1494: Took "84.699875ms" to run "out/minikube-darwin-arm64 -p functional-808000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:32045
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:32045
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-808000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.117.32 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1001 16:07:31.184590    1659 config.go:182] Loaded profile config "functional-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1001 16:07:31.223289    1659 config.go:182] Loaded profile config "functional-808000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-808000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "93.607958ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "34.829292ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "97.689125ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "34.394167ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.13s)

TestFunctional/parallel/MountCmd/any-port (10.09s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-808000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2475508348/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727824074419583000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2475508348/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727824074419583000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2475508348/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727824074419583000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2475508348/001/test-1727824074419583000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-808000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (60.255542ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1001 16:07:54.480325    1659 retry.go:31] will retry after 396.963117ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  1 23:07 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  1 23:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  1 23:07 test-1727824074419583000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh cat /mount-9p/test-1727824074419583000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-808000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b9874ea6-1702-4bdf-8052-da9ca0943a98] Pending
helpers_test.go:344: "busybox-mount" [b9874ea6-1702-4bdf-8052-da9ca0943a98] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b9874ea6-1702-4bdf-8052-da9ca0943a98] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b9874ea6-1702-4bdf-8052-da9ca0943a98] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.004116375s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-808000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-808000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2475508348/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.09s)
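The log above shows the pattern the mount test follows: start "minikube mount" as a background daemon, then retry "findmnt -T /mount-9p" over ssh until the 9p mount appears (the first probe fails and is retried after ~397ms). A minimal Go sketch of that start-then-poll pattern is below; the host directory, retry budget, and sleep interval are illustrative assumptions, and the profile name is taken from the log.

// mount_poll_sketch.go — launch `minikube mount`, then poll until the 9p mount is visible.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-808000" // profile name from the log above

	// Start the mount daemon (host dir -> guest /mount-9p) and leave it running.
	mnt := exec.Command("minikube", "-p", profile, "mount", "/tmp/testdir:/mount-9p")
	if err := mnt.Start(); err != nil {
		log.Fatalf("starting mount: %v", err)
	}
	defer mnt.Process.Kill() // rough stand-in for the test's cleanup/stop step

	// Poll until findmnt reports a 9p filesystem at the mount point.
	for i := 0; i < 10; i++ {
		out, err := exec.Command("minikube", "-p", profile, "ssh",
			"findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mount is up: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond) // the test uses a randomized backoff
	}
	log.Fatal("mount never appeared")
}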

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-808000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2886054493/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Done: out/minikube-darwin-arm64 -p functional-808000 ssh "findmnt -T /mount-9p | grep 9p": (1.456708917s)
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-808000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2886054493/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-808000 ssh "sudo umount -f /mount-9p": exit status 1 (59.175375ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-808000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-808000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2886054493/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.81s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-808000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4087068020/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-808000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4087068020/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-808000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4087068020/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-808000 ssh "findmnt -T" /mount1: exit status 1 (66.795208ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1001 16:08:06.384859    1659 retry.go:31] will retry after 601.981736ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-808000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-808000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-808000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4087068020/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-808000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4087068020/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-808000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4087068020/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.92s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-808000
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-808000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-808000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (238.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-056000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E1001 16:08:24.558500    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:09:46.481094    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:12:02.593665    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-056000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m58.6192225s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (238.81s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (9.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- rollout status deployment/busybox
E1001 16:12:14.616754    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:12:14.624379    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:12:14.636871    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:12:14.660218    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:12:14.703598    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:12:14.787035    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:12:14.950037    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:12:15.273408    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:12:15.916939    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:12:17.200355    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:12:19.762278    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-056000 -- rollout status deployment/busybox: (7.820828375s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-c6rp5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-jncwz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-t9cb4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-c6rp5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-jncwz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-t9cb4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-c6rp5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-jncwz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-t9cb4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.47s)
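The DeployApp log above applies the busybox DNS manifest, waits for the rollout, lists the pod names via jsonpath, and then execs nslookup inside each pod. A minimal Go sketch of that verification loop is below; the context name and jsonpath are taken from the log, while a kubectl binary on PATH (rather than "minikube kubectl --") is an assumption for brevity.

// dns_check_sketch.go — list pods, then run nslookup in each one via kubectl exec.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	ctx := "ha-056000"
	// Mirrors: kubectl get pods -o jsonpath='{.items[*].metadata.name}'
	out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatalf("listing pods: %v", err)
	}
	for _, pod := range strings.Fields(string(out)) {
		// Mirrors: kubectl exec <pod> -- nslookup kubernetes.io
		res, err := exec.Command("kubectl", "--context", ctx, "exec", pod,
			"--", "nslookup", "kubernetes.io").CombinedOutput()
		if err != nil {
			log.Fatalf("nslookup in %s failed: %v\n%s", pod, err, res)
		}
		fmt.Printf("%s resolved kubernetes.io\n", pod)
	}
}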

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-c6rp5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-c6rp5 -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-jncwz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-jncwz -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-t9cb4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-056000 -- exec busybox-7dff88458-t9cb4 -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.75s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (86.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-056000 -v=7 --alsologtostderr
E1001 16:12:24.885700    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:12:30.322704    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/addons-356000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:12:35.129017    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:12:55.612027    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
E1001 16:13:36.575009    1659 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19740-1141/.minikube/profiles/functional-808000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-056000 -v=7 --alsologtostderr: (1m25.94888525s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (86.17s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-056000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.31s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (4.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp testdata/cp-test.txt ha-056000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile2160848596/001/cp-test_ha-056000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000:/home/docker/cp-test.txt ha-056000-m02:/home/docker/cp-test_ha-056000_ha-056000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m02 "sudo cat /home/docker/cp-test_ha-056000_ha-056000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000:/home/docker/cp-test.txt ha-056000-m03:/home/docker/cp-test_ha-056000_ha-056000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m03 "sudo cat /home/docker/cp-test_ha-056000_ha-056000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000:/home/docker/cp-test.txt ha-056000-m04:/home/docker/cp-test_ha-056000_ha-056000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m04 "sudo cat /home/docker/cp-test_ha-056000_ha-056000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp testdata/cp-test.txt ha-056000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile2160848596/001/cp-test_ha-056000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m02:/home/docker/cp-test.txt ha-056000:/home/docker/cp-test_ha-056000-m02_ha-056000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000 "sudo cat /home/docker/cp-test_ha-056000-m02_ha-056000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m02:/home/docker/cp-test.txt ha-056000-m03:/home/docker/cp-test_ha-056000-m02_ha-056000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m03 "sudo cat /home/docker/cp-test_ha-056000-m02_ha-056000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m02:/home/docker/cp-test.txt ha-056000-m04:/home/docker/cp-test_ha-056000-m02_ha-056000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m04 "sudo cat /home/docker/cp-test_ha-056000-m02_ha-056000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp testdata/cp-test.txt ha-056000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile2160848596/001/cp-test_ha-056000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m03:/home/docker/cp-test.txt ha-056000:/home/docker/cp-test_ha-056000-m03_ha-056000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000 "sudo cat /home/docker/cp-test_ha-056000-m03_ha-056000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m03:/home/docker/cp-test.txt ha-056000-m02:/home/docker/cp-test_ha-056000-m03_ha-056000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m02 "sudo cat /home/docker/cp-test_ha-056000-m03_ha-056000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m03:/home/docker/cp-test.txt ha-056000-m04:/home/docker/cp-test_ha-056000-m03_ha-056000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m04 "sudo cat /home/docker/cp-test_ha-056000-m03_ha-056000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp testdata/cp-test.txt ha-056000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m04:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile2160848596/001/cp-test_ha-056000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m04:/home/docker/cp-test.txt ha-056000:/home/docker/cp-test_ha-056000-m04_ha-056000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000 "sudo cat /home/docker/cp-test_ha-056000-m04_ha-056000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m04:/home/docker/cp-test.txt ha-056000-m02:/home/docker/cp-test_ha-056000-m04_ha-056000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m02 "sudo cat /home/docker/cp-test_ha-056000-m04_ha-056000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 cp ha-056000-m04:/home/docker/cp-test.txt ha-056000-m03:/home/docker/cp-test_ha-056000-m04_ha-056000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-056000 ssh -n ha-056000-m03 "sudo cat /home/docker/cp-test_ha-056000-m04_ha-056000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.20s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (3.2s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-906000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-906000 --output=json --user=testUser: (3.199781709s)
--- PASS: TestJSONOutput/stop/Command (3.20s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-469000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-469000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (101.771ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ccb0397b-9af7-4b5b-92b3-821311ee772b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-469000] minikube v1.34.0 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae25bd28-4d06-4798-bc0b-1802cda2bff8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19740"}}
	{"specversion":"1.0","id":"449d4dc4-5df7-4f20-8302-52f96b8a2c5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig"}}
	{"specversion":"1.0","id":"e208da90-e775-41e2-9f61-702f4ae407ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"50838d54-ade8-497b-ad71-83e09582d737","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7c64e827-e180-4585-859f-55bdfb99fab8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube"}}
	{"specversion":"1.0","id":"2467a14a-7188-4813-be8c-51ae1c2c54db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e5b3aaad-4e49-4600-a2b0-e48348f0764c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-469000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-469000
--- PASS: TestErrorJSONOutput (0.21s)
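The stdout captured above is a stream of CloudEvents-style JSON lines, one event per line. A minimal Go sketch of decoding one of those lines and extracting the error details follows; the field names mirror the log output itself, and nothing beyond what the log shows is assumed.

// json_event_sketch.go — decode one minikube --output=json event line from the log above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type minikubeEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// The final event line from the TestErrorJSONOutput stdout above.
	line := `{"specversion":"1.0","id":"e5b3aaad-4e49-4600-a2b0-e48348f0764c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		log.Fatalf("decode failed: %v", err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("exit code %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
	}
}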

                                                
                                    
TestMainNoArgs (0.03s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (4.67s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.67s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-908000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-908000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (101.722875ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-908000] minikube v1.34.0 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19740
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19740-1141/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19740-1141/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-908000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-908000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.712375ms)

                                                
                                                
-- stdout --
	* The control-plane node NoKubernetes-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-908000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.695441791s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.607036042s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.30s)

                                                
                                    
TestNoKubernetes/serial/Stop (3.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-908000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-908000: (3.522956584s)
--- PASS: TestNoKubernetes/serial/Stop (3.52s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-908000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-908000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.802792ms)

                                                
                                                
-- stdout --
	* The control-plane node NoKubernetes-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-908000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-342000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.70s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (3.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-663000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-663000 --alsologtostderr -v=3: (3.292200458s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (3.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-708000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-708000 --alsologtostderr -v=3: (3.655126541s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.66s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-663000 -n old-k8s-version-663000: exit status 7 (56.909167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-663000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-708000 -n no-preload-708000: exit status 7 (58.7845ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-708000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (3.6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-591000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-591000 --alsologtostderr -v=3: (3.59806325s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.60s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (3.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-311000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-311000 --alsologtostderr -v=3: (3.505164083s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.51s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-591000 -n embed-certs-591000: exit status 7 (54.4555ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-591000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-311000 -n default-k8s-diff-port-311000: exit status 7 (56.529458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-311000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-584000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.87s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-584000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-584000 --alsologtostderr -v=3: (2.866452416s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.87s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-584000 -n newest-cni-584000: exit status 7 (53.36275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-584000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    

Test skip (20/273)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.43s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-870000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-870000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-870000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-870000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-870000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-870000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-870000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-870000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-870000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-870000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-870000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: /etc/hosts:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: /etc/resolv.conf:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-870000

>>> host: crictl pods:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: crictl containers:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> k8s: describe netcat deployment:
error: context "cilium-870000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-870000" does not exist

>>> k8s: netcat logs:
error: context "cilium-870000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-870000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-870000" does not exist

>>> k8s: coredns logs:
error: context "cilium-870000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-870000" does not exist

>>> k8s: api server logs:
error: context "cilium-870000" does not exist

>>> host: /etc/cni:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: ip a s:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: ip r s:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: iptables-save:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: iptables table nat:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-870000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-870000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-870000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-870000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-870000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-870000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-870000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-870000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-870000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-870000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-870000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: kubelet daemon config:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> k8s: kubelet logs:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-870000

>>> host: docker daemon status:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: docker daemon config:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: docker system info:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: cri-docker daemon status:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: cri-docker daemon config:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: cri-dockerd version:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: containerd daemon status:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: containerd daemon config:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: containerd config dump:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: crio daemon status:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: crio daemon config:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: /etc/crio:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

>>> host: crio config:
* Profile "cilium-870000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-870000"

----------------------- debugLogs end: cilium-870000 [took: 2.325814167s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-870000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-870000
--- SKIP: TestNetworkPlugins/group/cilium (2.43s)

TestStartStop/group/disable-driver-mounts (0.11s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-458000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-458000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
