Test Report: QEMU_macOS 20062

964562641276d457941dbb6d7cf4aa7e43312d02:2024-12-09:37415

Failed tests (99/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 23.39
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.1
48 TestCertOptions 12.29
49 TestCertExpiration 197.48
50 TestDockerFlags 12.33
51 TestForceSystemdFlag 11
52 TestForceSystemdEnv 10.16
97 TestFunctional/parallel/ServiceCmdConnect 40.72
162 TestMultiControlPlane/serial/StartCluster 725.38
163 TestMultiControlPlane/serial/DeployApp 110.2
164 TestMultiControlPlane/serial/PingHostFromPods 0.1
165 TestMultiControlPlane/serial/AddWorkerNode 0.09
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.09
169 TestMultiControlPlane/serial/StopSecondaryNode 0.12
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
171 TestMultiControlPlane/serial/RestartSecondaryNode 0.16
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.1
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 963.63
184 TestJSONOutput/start/Command 725.27
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.09
196 TestJSONOutput/unpause/Command 0.06
213 TestMinikubeProfile 190.9
216 TestMountStart/serial/StartWithMountFirst 10.06
219 TestMultiNode/serial/FreshStart2Nodes 9.95
220 TestMultiNode/serial/DeployApp2Nodes 78.46
221 TestMultiNode/serial/PingHostFrom2Pods 0.1
222 TestMultiNode/serial/AddNode 0.08
223 TestMultiNode/serial/MultiNodeLabels 0.13
224 TestMultiNode/serial/ProfileList 0.09
225 TestMultiNode/serial/CopyFile 0.07
226 TestMultiNode/serial/StopNode 0.16
227 TestMultiNode/serial/StartAfterStop 48.59
228 TestMultiNode/serial/RestartKeepsNodes 8.76
229 TestMultiNode/serial/DeleteNode 0.12
230 TestMultiNode/serial/StopMultiNode 3.45
231 TestMultiNode/serial/RestartMultiNode 5.26
232 TestMultiNode/serial/ValidateNameConflict 20.13
236 TestPreload 10.03
238 TestScheduledStopUnix 10.11
239 TestSkaffold 12.69
242 TestRunningBinaryUpgrade 605.3
244 TestKubernetesUpgrade 18.8
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.39
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.02
260 TestStoppedBinaryUpgrade/Upgrade 575.81
262 TestPause/serial/Start 10.04
272 TestNoKubernetes/serial/StartWithK8s 10.05
273 TestNoKubernetes/serial/StartWithStopK8s 5.33
274 TestNoKubernetes/serial/Start 5.33
278 TestNoKubernetes/serial/StartNoArgs 5.29
280 TestNetworkPlugins/group/auto/Start 9.95
281 TestNetworkPlugins/group/flannel/Start 9.92
282 TestNetworkPlugins/group/kindnet/Start 10.13
283 TestNetworkPlugins/group/enable-default-cni/Start 10.04
284 TestNetworkPlugins/group/bridge/Start 9.86
285 TestNetworkPlugins/group/kubenet/Start 10.09
286 TestNetworkPlugins/group/custom-flannel/Start 9.87
287 TestNetworkPlugins/group/calico/Start 9.8
288 TestNetworkPlugins/group/false/Start 9.89
291 TestStartStop/group/old-k8s-version/serial/FirstStart 10.02
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.24
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
300 TestStartStop/group/old-k8s-version/serial/Pause 0.12
302 TestStartStop/group/no-preload/serial/FirstStart 9.98
304 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.07
305 TestStartStop/group/no-preload/serial/DeployApp 0.1
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
309 TestStartStop/group/no-preload/serial/SecondStart 5.69
310 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
314 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.29
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
318 TestStartStop/group/no-preload/serial/Pause 0.11
320 TestStartStop/group/newest-cni/serial/FirstStart 9.97
321 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
322 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
323 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
324 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
326 TestStartStop/group/embed-certs/serial/FirstStart 10.02
331 TestStartStop/group/newest-cni/serial/SecondStart 6.21
332 TestStartStop/group/embed-certs/serial/DeployApp 0.1
333 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
336 TestStartStop/group/embed-certs/serial/SecondStart 5.27
339 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
340 TestStartStop/group/newest-cni/serial/Pause 0.11
341 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
342 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
343 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
344 TestStartStop/group/embed-certs/serial/Pause 0.11
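
Nearly all of the short (roughly 10 s) failures above abort at the same point: the qemu2 driver cannot reach the socket_vmnet daemon on the build agent (see the individual logs below, and the diagnostic note under TestOffline). To reproduce a single failure in isolation, one test can be rerun from a minikube checkout; this is a sketch based on the contributor docs' workflow, where TEST_ARGS is forwarded to the integration-test binary:

	# Rerun only TestOffline against the qemu2 driver (hypothetical local rerun;
	# assumes a minikube source checkout with its Makefile and a built binary):
	env TEST_ARGS="-minikube-start-args=--driver=qemu2 -test.run TestOffline" make integration
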
TestDownloadOnly/v1.20.0/json-events (23.39s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-632000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-632000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (23.386639459s)

-- stdout --
	{"specversion":"1.0","id":"559eb033-b96d-4307-9db1-f1141d42d2dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-632000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e9486ec-e080-465b-9f52-ba596c405390","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20062"}}
	{"specversion":"1.0","id":"64aa5cd4-c44b-4b53-b507-8b3f0c62fd4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig"}}
	{"specversion":"1.0","id":"1bbdc8c0-4f53-4c40-b075-a5a2a4723c70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"2ae1e560-c899-40a6-a210-1588e9d6b8c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3bcbf178-e5f4-44fc-a072-827c5c61f4cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube"}}
	{"specversion":"1.0","id":"bbce2b33-5343-48d6-968d-25fccd8db8a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"a75ad0d8-244c-4d18-b491-e95097dd2aad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"613e56cb-4815-4364-bfce-8ebcdeb643d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"9e4753b0-5a90-4a7e-ae61-65dc86aa3eef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b328dc00-35b0-4761-b442-505c27fad10a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-632000\" primary control-plane node in \"download-only-632000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"38ed4f94-d71b-4b6b-ba7e-2d1754ecd956","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e72cf207-b74e-4dcd-96f6-34d4488229e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107f78320 0x107f78320 0x107f78320 0x107f78320 0x107f78320 0x107f78320 0x107f78320] Decompressors:map[bz2:0x14000697d70 gz:0x14000697d78 tar:0x14000697c50 tar.bz2:0x14000697cb0 tar.gz:0x14000697d00 tar.xz:0x14000697d30 tar.zst:0x14000697d60 tbz2:0x14000697cb0 tgz:0x14000697d00 txz:0x14000697d30 tzst:0x14000697d60 xz:0x14000697d90 zip:0x14000697da0 zst:0x14000697d98] Getters:map[file:0x1400198c560 http:0x140008ec6e0 https:0x140008ec730] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"447ec6c0-f419-43d4-9c4d-52261523b267","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1209 15:42:47.863790    1743 out.go:345] Setting OutFile to fd 1 ...
	I1209 15:42:47.863972    1743 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 15:42:47.863976    1743 out.go:358] Setting ErrFile to fd 2...
	I1209 15:42:47.863978    1743 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 15:42:47.864116    1743 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	W1209 15:42:47.864209    1743 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/20062-1231/.minikube/config/config.json: open /Users/jenkins/minikube-integration/20062-1231/.minikube/config/config.json: no such file or directory
	I1209 15:42:47.865671    1743 out.go:352] Setting JSON to true
	I1209 15:42:47.884799    1743 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":737,"bootTime":1733787030,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 15:42:47.884913    1743 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 15:42:47.890615    1743 out.go:97] [download-only-632000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 15:42:47.890765    1743 notify.go:220] Checking for updates...
	W1209 15:42:47.890833    1743 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball: no such file or directory
	I1209 15:42:47.894532    1743 out.go:169] MINIKUBE_LOCATION=20062
	I1209 15:42:47.897567    1743 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 15:42:47.902571    1743 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 15:42:47.906589    1743 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 15:42:47.908002    1743 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	W1209 15:42:47.913645    1743 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 15:42:47.913900    1743 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 15:42:47.917605    1743 out.go:97] Using the qemu2 driver based on user configuration
	I1209 15:42:47.917627    1743 start.go:297] selected driver: qemu2
	I1209 15:42:47.917643    1743 start.go:901] validating driver "qemu2" against <nil>
	I1209 15:42:47.917734    1743 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 15:42:47.921627    1743 out.go:169] Automatically selected the socket_vmnet network
	I1209 15:42:47.928592    1743 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1209 15:42:47.928688    1743 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 15:42:47.928728    1743 cni.go:84] Creating CNI manager for ""
	I1209 15:42:47.928763    1743 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1209 15:42:47.928831    1743 start.go:340] cluster config:
	{Name:download-only-632000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 15:42:47.933569    1743 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 15:42:47.937467    1743 out.go:97] Downloading VM boot image ...
	I1209 15:42:47.937480    1743 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso
	I1209 15:42:58.466568    1743 out.go:97] Starting "download-only-632000" primary control-plane node in "download-only-632000" cluster
	I1209 15:42:58.466600    1743 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1209 15:42:58.521837    1743 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1209 15:42:58.521856    1743 cache.go:56] Caching tarball of preloaded images
	I1209 15:42:58.522031    1743 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1209 15:42:58.528151    1743 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1209 15:42:58.528158    1743 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1209 15:42:58.613237    1743 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1209 15:43:09.938403    1743 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1209 15:43:09.938577    1743 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1209 15:43:10.633178    1743 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1209 15:43:10.633380    1743 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/download-only-632000/config.json ...
	I1209 15:43:10.633397    1743 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/download-only-632000/config.json: {Name:mk306deaa9e300654af025aebb243664b8b97ba0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 15:43:10.633659    1743 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1209 15:43:10.633900    1743 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1209 15:43:11.168231    1743 out.go:193] 
	W1209 15:43:11.174233    1743 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107f78320 0x107f78320 0x107f78320 0x107f78320 0x107f78320 0x107f78320 0x107f78320] Decompressors:map[bz2:0x14000697d70 gz:0x14000697d78 tar:0x14000697c50 tar.bz2:0x14000697cb0 tar.gz:0x14000697d00 tar.xz:0x14000697d30 tar.zst:0x14000697d60 tbz2:0x14000697cb0 tgz:0x14000697d00 txz:0x14000697d30 tzst:0x14000697d60 xz:0x14000697d90 zip:0x14000697da0 zst:0x14000697d98] Getters:map[file:0x1400198c560 http:0x140008ec6e0 https:0x140008ec730] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1209 15:43:11.174256    1743 out_reason.go:110] 
	W1209 15:43:11.183170    1743 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 15:43:11.187058    1743 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-632000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (23.39s)
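
Note: every stdout line above is a CloudEvents-style JSON object, so the terminal failure can be pulled out mechanically instead of read from the wall of text. A minimal jq sketch, assuming the same start invocation the test used:

	# Keep only minikube's error events and print their messages.
	out/minikube-darwin-arm64 start -o=json --download-only -p download-only-632000 \
	    --force --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'

Here that yields the INET_CACHE_KUBECTL error: the checksum fetch for the v1.20.0 darwin/arm64 kubectl returned HTTP 404.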

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
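
This subtest fails as a direct consequence of the previous one: kubectl was never cached because its checksum URL 404ed. The missing artifact can be confirmed outside minikube; a sketch using the exact URL from the error above:

	# Print the final HTTP status (after redirects) for the checksum file the getter tried.
	# At the time of this run it was 404, suggesting no darwin/arm64 kubectl artifact
	# is published for v1.20.0.
	curl -sL -o /dev/null -w '%{http_code}\n' \
	    https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256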

TestOffline (10.1s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-011000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-011000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.932623333s)

-- stdout --
	* [offline-docker-011000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-011000" primary control-plane node in "offline-docker-011000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-011000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 16:44:05.603661    4779 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:44:05.603840    4779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:44:05.603845    4779 out.go:358] Setting ErrFile to fd 2...
	I1209 16:44:05.603848    4779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:44:05.603969    4779 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:44:05.605352    4779 out.go:352] Setting JSON to false
	I1209 16:44:05.624909    4779 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4415,"bootTime":1733787030,"procs":537,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:44:05.624984    4779 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:44:05.630101    4779 out.go:177] * [offline-docker-011000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:44:05.638138    4779 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:44:05.638138    4779 notify.go:220] Checking for updates...
	I1209 16:44:05.645064    4779 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:44:05.648077    4779 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:44:05.651065    4779 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:44:05.654061    4779 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:44:05.657078    4779 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:44:05.658791    4779 config.go:182] Loaded profile config "multinode-350000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:44:05.658851    4779 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:44:05.663007    4779 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:44:05.669868    4779 start.go:297] selected driver: qemu2
	I1209 16:44:05.669879    4779 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:44:05.669887    4779 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:44:05.672070    4779 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 16:44:05.675040    4779 out.go:177] * Automatically selected the socket_vmnet network
	I1209 16:44:05.679086    4779 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:44:05.679105    4779 cni.go:84] Creating CNI manager for ""
	I1209 16:44:05.679133    4779 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:44:05.679136    4779 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 16:44:05.679176    4779 start.go:340] cluster config:
	{Name:offline-docker-011000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:44:05.683752    4779 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:44:05.692042    4779 out.go:177] * Starting "offline-docker-011000" primary control-plane node in "offline-docker-011000" cluster
	I1209 16:44:05.696072    4779 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:44:05.696100    4779 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:44:05.696111    4779 cache.go:56] Caching tarball of preloaded images
	I1209 16:44:05.696200    4779 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:44:05.696206    4779 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:44:05.696282    4779 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/offline-docker-011000/config.json ...
	I1209 16:44:05.696292    4779 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/offline-docker-011000/config.json: {Name:mkff5cb5c9516b9a67f3b8d6376dc98e7e454492 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:44:05.696807    4779 start.go:360] acquireMachinesLock for offline-docker-011000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:44:05.696854    4779 start.go:364] duration metric: took 37.041µs to acquireMachinesLock for "offline-docker-011000"
	I1209 16:44:05.696866    4779 start.go:93] Provisioning new machine with config: &{Name:offline-docker-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:44:05.696890    4779 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:44:05.700092    4779 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1209 16:44:05.715490    4779 start.go:159] libmachine.API.Create for "offline-docker-011000" (driver="qemu2")
	I1209 16:44:05.715526    4779 client.go:168] LocalClient.Create starting
	I1209 16:44:05.715604    4779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:44:05.715643    4779 main.go:141] libmachine: Decoding PEM data...
	I1209 16:44:05.715655    4779 main.go:141] libmachine: Parsing certificate...
	I1209 16:44:05.715704    4779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:44:05.715734    4779 main.go:141] libmachine: Decoding PEM data...
	I1209 16:44:05.715741    4779 main.go:141] libmachine: Parsing certificate...
	I1209 16:44:05.716214    4779 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:44:05.878337    4779 main.go:141] libmachine: Creating SSH key...
	I1209 16:44:05.965533    4779 main.go:141] libmachine: Creating Disk image...
	I1209 16:44:05.965547    4779 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:44:05.965802    4779 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/offline-docker-011000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/offline-docker-011000/disk.qcow2
	I1209 16:44:05.985519    4779 main.go:141] libmachine: STDOUT: 
	I1209 16:44:05.985537    4779 main.go:141] libmachine: STDERR: 
	I1209 16:44:05.985616    4779 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/offline-docker-011000/disk.qcow2 +20000M
	I1209 16:44:05.995154    4779 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:44:05.995175    4779 main.go:141] libmachine: STDERR: 
	I1209 16:44:05.995198    4779 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/offline-docker-011000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/offline-docker-011000/disk.qcow2
	I1209 16:44:05.995203    4779 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:44:05.995219    4779 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:44:05.995249    4779 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/offline-docker-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/offline-docker-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/offline-docker-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:16:2b:1b:56:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/offline-docker-011000/disk.qcow2
	I1209 16:44:05.997353    4779 main.go:141] libmachine: STDOUT: 
	I1209 16:44:05.997413    4779 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:44:05.997433    4779 client.go:171] duration metric: took 281.90275ms to LocalClient.Create
	I1209 16:44:07.999363    4779 start.go:128] duration metric: took 2.302474333s to createHost
	I1209 16:44:07.999382    4779 start.go:83] releasing machines lock for "offline-docker-011000", held for 2.302531958s
	W1209 16:44:07.999393    4779 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:44:08.006385    4779 out.go:177] * Deleting "offline-docker-011000" in qemu2 ...
	W1209 16:44:08.017062    4779 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:44:08.017072    4779 start.go:729] Will try again in 5 seconds ...
	I1209 16:44:13.019375    4779 start.go:360] acquireMachinesLock for offline-docker-011000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:44:13.019894    4779 start.go:364] duration metric: took 396.333µs to acquireMachinesLock for "offline-docker-011000"
	I1209 16:44:13.020038    4779 start.go:93] Provisioning new machine with config: &{Name:offline-docker-011000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-011000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:44:13.020336    4779 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:44:13.031951    4779 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1209 16:44:13.081942    4779 start.go:159] libmachine.API.Create for "offline-docker-011000" (driver="qemu2")
	I1209 16:44:13.081991    4779 client.go:168] LocalClient.Create starting
	I1209 16:44:13.082131    4779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:44:13.082214    4779 main.go:141] libmachine: Decoding PEM data...
	I1209 16:44:13.082232    4779 main.go:141] libmachine: Parsing certificate...
	I1209 16:44:13.082303    4779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:44:13.082361    4779 main.go:141] libmachine: Decoding PEM data...
	I1209 16:44:13.082374    4779 main.go:141] libmachine: Parsing certificate...
	I1209 16:44:13.083122    4779 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:44:13.258404    4779 main.go:141] libmachine: Creating SSH key...
	I1209 16:44:13.422736    4779 main.go:141] libmachine: Creating Disk image...
	I1209 16:44:13.422743    4779 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:44:13.422989    4779 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/offline-docker-011000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/offline-docker-011000/disk.qcow2
	I1209 16:44:13.433574    4779 main.go:141] libmachine: STDOUT: 
	I1209 16:44:13.433600    4779 main.go:141] libmachine: STDERR: 
	I1209 16:44:13.433656    4779 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/offline-docker-011000/disk.qcow2 +20000M
	I1209 16:44:13.442245    4779 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:44:13.442258    4779 main.go:141] libmachine: STDERR: 
	I1209 16:44:13.442271    4779 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/offline-docker-011000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/offline-docker-011000/disk.qcow2
	I1209 16:44:13.442276    4779 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:44:13.442284    4779 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:44:13.442317    4779 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/offline-docker-011000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/offline-docker-011000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/offline-docker-011000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:b7:c1:21:fa:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/offline-docker-011000/disk.qcow2
	I1209 16:44:13.444185    4779 main.go:141] libmachine: STDOUT: 
	I1209 16:44:13.444203    4779 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:44:13.444218    4779 client.go:171] duration metric: took 362.223792ms to LocalClient.Create
	I1209 16:44:15.446424    4779 start.go:128] duration metric: took 2.426053959s to createHost
	I1209 16:44:15.446501    4779 start.go:83] releasing machines lock for "offline-docker-011000", held for 2.426591917s
	W1209 16:44:15.446920    4779 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-011000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-011000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:44:15.465623    4779 out.go:201] 
	W1209 16:44:15.468706    4779 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:44:15.468768    4779 out.go:270] * 
	* 
	W1209 16:44:15.471412    4779 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:44:15.488718    4779 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-011000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-12-09 16:44:15.503483 -0800 PST m=+3687.696555251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-011000 -n offline-docker-011000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-011000 -n offline-docker-011000: exit status 7 (72.90975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-011000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-011000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-011000
--- FAIL: TestOffline (10.10s)
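
Note: the `Failed to connect to "/var/run/socket_vmnet": Connection refused` error seen here recurs in almost every failure in this report. QEMU itself starts fine; the socket_vmnet daemon that provides VM networking was simply not reachable on the agent. A quick diagnostic sketch, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe:

	# Does the unix socket exist, and is a daemon actually running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet || echo 'socket_vmnet is not running'

	# The daemon runs as a root service; sudo needs brew on its PATH,
	# hence the indirection the minikube docs use:
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services start socket_vmnet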

TestCertOptions (12.29s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-274000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-274000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (12.009232708s)

-- stdout --
	* [cert-options-274000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-274000" primary control-plane node in "cert-options-274000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-274000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-274000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-274000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-274000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-274000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (83.528208ms)

-- stdout --
	* The control-plane node cert-options-274000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-274000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-274000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-274000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-274000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-274000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (44.763666ms)

-- stdout --
	* The control-plane node cert-options-274000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-274000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-274000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control-plane node cert-options-274000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-274000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-12-09 16:44:50.324712 -0800 PST m=+3722.517913043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-274000 -n cert-options-274000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-274000 -n cert-options-274000: exit status 7 (35.058583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-274000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-274000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-274000
--- FAIL: TestCertOptions (12.29s)
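Every failure in this group traces back to the same root cause visible in the stderr above: nothing is accepting connections on /var/run/socket_vmnet, so the qemu2 driver can never bring a VM up. A minimal triage sketch, assuming socket_vmnet was installed via Homebrew as recommended for minikube's qemu2 driver (adjust for other install methods):

	# Is the socket present, and does anything own it?
	ls -l /var/run/socket_vmnet

	# Restart the Homebrew-managed service as root, then re-run the suite
	sudo brew services restart socket_vmnet
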

TestCertExpiration (197.48s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-966000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-966000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (12.09253s)

-- stdout --
	* [cert-expiration-966000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-966000" primary control-plane node in "cert-expiration-966000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-966000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-966000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-966000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-966000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-966000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.226036s)

-- stdout --
	* [cert-expiration-966000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-966000" primary control-plane node in "cert-expiration-966000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-966000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-966000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-966000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-966000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-966000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-966000" primary control-plane node in "cert-expiration-966000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-966000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-966000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-966000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-12-09 16:47:52.997158 -0800 PST m=+3905.191039334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-966000 -n cert-expiration-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-966000 -n cert-expiration-966000: exit status 7 (73.508ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-966000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-966000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-966000
--- FAIL: TestCertExpiration (197.48s)
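For reference, the expiry scenario is driven entirely by the two start commands captured above. A hedged manual reproduction under a working socket_vmnet setup (the 180-second wait is an assumption matching the 3m cert lifetime; the profile name is arbitrary):

	# Issue 3-minute certs, let them lapse, then restart and expect an
	# expired-certs warning plus regeneration:
	out/minikube-darwin-arm64 start -p cert-expiration --memory=2048 --cert-expiration=3m --driver=qemu2
	sleep 180
	out/minikube-darwin-arm64 start -p cert-expiration --memory=2048 --cert-expiration=8760h --driver=qemu2
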

TestDockerFlags (12.33s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-549000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-549000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.04254425s)

-- stdout --
	* [docker-flags-549000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-549000" primary control-plane node in "docker-flags-549000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-549000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 16:44:25.857815    4978 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:44:25.857964    4978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:44:25.857969    4978 out.go:358] Setting ErrFile to fd 2...
	I1209 16:44:25.857972    4978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:44:25.858134    4978 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:44:25.859304    4978 out.go:352] Setting JSON to false
	I1209 16:44:25.877635    4978 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4435,"bootTime":1733787030,"procs":541,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:44:25.877707    4978 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:44:25.885879    4978 out.go:177] * [docker-flags-549000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:44:25.895759    4978 notify.go:220] Checking for updates...
	I1209 16:44:25.901825    4978 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:44:25.906823    4978 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:44:25.913921    4978 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:44:25.921850    4978 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:44:25.925689    4978 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:44:25.934745    4978 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:44:25.939207    4978 config.go:182] Loaded profile config "force-systemd-flag-795000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:44:25.939274    4978 config.go:182] Loaded profile config "multinode-350000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:44:25.939317    4978 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:44:25.942829    4978 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:44:25.949838    4978 start.go:297] selected driver: qemu2
	I1209 16:44:25.949844    4978 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:44:25.949851    4978 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:44:25.952072    4978 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 16:44:25.955855    4978 out.go:177] * Automatically selected the socket_vmnet network
	I1209 16:44:25.958927    4978 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1209 16:44:25.958950    4978 cni.go:84] Creating CNI manager for ""
	I1209 16:44:25.958973    4978 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:44:25.958981    4978 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 16:44:25.959018    4978 start.go:340] cluster config:
	{Name:docker-flags-549000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:44:25.963319    4978 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:44:25.970832    4978 out.go:177] * Starting "docker-flags-549000" primary control-plane node in "docker-flags-549000" cluster
	I1209 16:44:25.974827    4978 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:44:25.974840    4978 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:44:25.974851    4978 cache.go:56] Caching tarball of preloaded images
	I1209 16:44:25.974923    4978 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:44:25.974929    4978 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:44:25.975002    4978 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/docker-flags-549000/config.json ...
	I1209 16:44:25.975012    4978 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/docker-flags-549000/config.json: {Name:mk8d9b981ebb47e9875bafe596184f26777b2bcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:44:25.975493    4978 start.go:360] acquireMachinesLock for docker-flags-549000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:44:27.940138    4978 start.go:364] duration metric: took 1.964627584s to acquireMachinesLock for "docker-flags-549000"
	I1209 16:44:27.940236    4978 start.go:93] Provisioning new machine with config: &{Name:docker-flags-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:44:27.940372    4978 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:44:27.948860    4978 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1209 16:44:28.000777    4978 start.go:159] libmachine.API.Create for "docker-flags-549000" (driver="qemu2")
	I1209 16:44:28.000819    4978 client.go:168] LocalClient.Create starting
	I1209 16:44:28.000971    4978 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:44:28.001047    4978 main.go:141] libmachine: Decoding PEM data...
	I1209 16:44:28.001068    4978 main.go:141] libmachine: Parsing certificate...
	I1209 16:44:28.001136    4978 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:44:28.001194    4978 main.go:141] libmachine: Decoding PEM data...
	I1209 16:44:28.001208    4978 main.go:141] libmachine: Parsing certificate...
	I1209 16:44:28.001967    4978 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:44:28.173704    4978 main.go:141] libmachine: Creating SSH key...
	I1209 16:44:28.250352    4978 main.go:141] libmachine: Creating Disk image...
	I1209 16:44:28.250361    4978 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:44:28.250597    4978 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/docker-flags-549000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/docker-flags-549000/disk.qcow2
	I1209 16:44:28.260581    4978 main.go:141] libmachine: STDOUT: 
	I1209 16:44:28.260604    4978 main.go:141] libmachine: STDERR: 
	I1209 16:44:28.260660    4978 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/docker-flags-549000/disk.qcow2 +20000M
	I1209 16:44:28.269019    4978 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:44:28.269035    4978 main.go:141] libmachine: STDERR: 
	I1209 16:44:28.269053    4978 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/docker-flags-549000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/docker-flags-549000/disk.qcow2
	I1209 16:44:28.269057    4978 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:44:28.269071    4978 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:44:28.269135    4978 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/docker-flags-549000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/docker-flags-549000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/docker-flags-549000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:72:67:ea:31:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/docker-flags-549000/disk.qcow2
	I1209 16:44:28.270911    4978 main.go:141] libmachine: STDOUT: 
	I1209 16:44:28.270923    4978 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:44:28.270943    4978 client.go:171] duration metric: took 270.115917ms to LocalClient.Create
	I1209 16:44:30.273113    4978 start.go:128] duration metric: took 2.332724209s to createHost
	I1209 16:44:30.273181    4978 start.go:83] releasing machines lock for "docker-flags-549000", held for 2.333027541s
	W1209 16:44:30.273265    4978 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:44:30.280528    4978 out.go:177] * Deleting "docker-flags-549000" in qemu2 ...
	W1209 16:44:30.321616    4978 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:44:30.321647    4978 start.go:729] Will try again in 5 seconds ...
	I1209 16:44:35.323898    4978 start.go:360] acquireMachinesLock for docker-flags-549000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:44:35.404031    4978 start.go:364] duration metric: took 80.006583ms to acquireMachinesLock for "docker-flags-549000"
	I1209 16:44:35.404206    4978 start.go:93] Provisioning new machine with config: &{Name:docker-flags-549000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-549000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:44:35.404475    4978 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:44:35.414854    4978 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1209 16:44:35.464843    4978 start.go:159] libmachine.API.Create for "docker-flags-549000" (driver="qemu2")
	I1209 16:44:35.464896    4978 client.go:168] LocalClient.Create starting
	I1209 16:44:35.465033    4978 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:44:35.465086    4978 main.go:141] libmachine: Decoding PEM data...
	I1209 16:44:35.465102    4978 main.go:141] libmachine: Parsing certificate...
	I1209 16:44:35.465177    4978 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:44:35.465209    4978 main.go:141] libmachine: Decoding PEM data...
	I1209 16:44:35.465223    4978 main.go:141] libmachine: Parsing certificate...
	I1209 16:44:35.465806    4978 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:44:35.701732    4978 main.go:141] libmachine: Creating SSH key...
	I1209 16:44:35.803556    4978 main.go:141] libmachine: Creating Disk image...
	I1209 16:44:35.803564    4978 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:44:35.803746    4978 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/docker-flags-549000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/docker-flags-549000/disk.qcow2
	I1209 16:44:35.813531    4978 main.go:141] libmachine: STDOUT: 
	I1209 16:44:35.813554    4978 main.go:141] libmachine: STDERR: 
	I1209 16:44:35.813614    4978 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/docker-flags-549000/disk.qcow2 +20000M
	I1209 16:44:35.822024    4978 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:44:35.822040    4978 main.go:141] libmachine: STDERR: 
	I1209 16:44:35.822061    4978 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/docker-flags-549000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/docker-flags-549000/disk.qcow2
	I1209 16:44:35.822067    4978 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:44:35.822081    4978 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:44:35.822109    4978 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/docker-flags-549000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/docker-flags-549000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/docker-flags-549000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:26:1b:11:8c:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/docker-flags-549000/disk.qcow2
	I1209 16:44:35.823899    4978 main.go:141] libmachine: STDOUT: 
	I1209 16:44:35.823913    4978 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:44:35.823926    4978 client.go:171] duration metric: took 359.026834ms to LocalClient.Create
	I1209 16:44:37.826222    4978 start.go:128] duration metric: took 2.42170975s to createHost
	I1209 16:44:37.826335    4978 start.go:83] releasing machines lock for "docker-flags-549000", held for 2.422285833s
	W1209 16:44:37.826689    4978 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-549000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-549000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:44:37.839232    4978 out.go:201] 
	W1209 16:44:37.843519    4978 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:44:37.843553    4978 out.go:270] * 
	* 
	W1209 16:44:37.846096    4978 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:44:37.854285    4978 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-549000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-549000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-549000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (80.987375ms)

-- stdout --
	* The control-plane node docker-flags-549000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-549000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-549000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-549000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-549000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-549000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-549000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-549000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-549000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (60.838792ms)

-- stdout --
	* The control-plane node docker-flags-549000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-549000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-549000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-549000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-549000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-549000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-12-09 16:44:38.008544 -0800 PST m=+3710.201699876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-549000 -n docker-flags-549000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-549000 -n docker-flags-549000: exit status 7 (39.745583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-549000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-549000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-549000
--- FAIL: TestDockerFlags (12.33s)
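Had the VM come up, the two systemctl probes above are the whole verification: the Environment property must carry the --docker-env pairs and ExecStart the --docker-opt flags. On a healthy node, the same commands from the log would be expected to show those values:

	# Expect Environment to include FOO=BAR and BAZ=BAT, and ExecStart to carry --debug and --icc=true
	out/minikube-darwin-arm64 -p docker-flags-549000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	out/minikube-darwin-arm64 -p docker-flags-549000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
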

TestForceSystemdFlag (11s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-795000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-795000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.775098708s)

-- stdout --
	* [force-systemd-flag-795000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-795000" primary control-plane node in "force-systemd-flag-795000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-795000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 16:44:24.703596    4964 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:44:24.703774    4964 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:44:24.703777    4964 out.go:358] Setting ErrFile to fd 2...
	I1209 16:44:24.703780    4964 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:44:24.703908    4964 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:44:24.705090    4964 out.go:352] Setting JSON to false
	I1209 16:44:24.722541    4964 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4434,"bootTime":1733787030,"procs":541,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:44:24.722609    4964 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:44:24.734110    4964 out.go:177] * [force-systemd-flag-795000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:44:24.744057    4964 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:44:24.744079    4964 notify.go:220] Checking for updates...
	I1209 16:44:24.756986    4964 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:44:24.760008    4964 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:44:24.764964    4964 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:44:24.768016    4964 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:44:24.770970    4964 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:44:24.774341    4964 config.go:182] Loaded profile config "force-systemd-env-355000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:44:24.774425    4964 config.go:182] Loaded profile config "multinode-350000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:44:24.774476    4964 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:44:24.779055    4964 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:44:24.786002    4964 start.go:297] selected driver: qemu2
	I1209 16:44:24.786009    4964 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:44:24.786016    4964 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:44:24.788697    4964 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 16:44:24.792061    4964 out.go:177] * Automatically selected the socket_vmnet network
	I1209 16:44:24.793614    4964 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 16:44:24.793638    4964 cni.go:84] Creating CNI manager for ""
	I1209 16:44:24.793661    4964 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:44:24.793670    4964 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 16:44:24.793700    4964 start.go:340] cluster config:
	{Name:force-systemd-flag-795000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-795000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:44:24.799095    4964 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:44:24.808036    4964 out.go:177] * Starting "force-systemd-flag-795000" primary control-plane node in "force-systemd-flag-795000" cluster
	I1209 16:44:24.811975    4964 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:44:24.811994    4964 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:44:24.812011    4964 cache.go:56] Caching tarball of preloaded images
	I1209 16:44:24.812106    4964 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:44:24.812113    4964 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:44:24.812200    4964 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/force-systemd-flag-795000/config.json ...
	I1209 16:44:24.812212    4964 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/force-systemd-flag-795000/config.json: {Name:mk248072c7ce335185704cfc94b0073c275bcf06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:44:24.812804    4964 start.go:360] acquireMachinesLock for force-systemd-flag-795000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:44:25.555931    4964 start.go:364] duration metric: took 743.091334ms to acquireMachinesLock for "force-systemd-flag-795000"
	I1209 16:44:25.556137    4964 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-795000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-795000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:44:25.556354    4964 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:44:25.565860    4964 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1209 16:44:25.613187    4964 start.go:159] libmachine.API.Create for "force-systemd-flag-795000" (driver="qemu2")
	I1209 16:44:25.613244    4964 client.go:168] LocalClient.Create starting
	I1209 16:44:25.613380    4964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:44:25.613455    4964 main.go:141] libmachine: Decoding PEM data...
	I1209 16:44:25.613479    4964 main.go:141] libmachine: Parsing certificate...
	I1209 16:44:25.613552    4964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:44:25.613612    4964 main.go:141] libmachine: Decoding PEM data...
	I1209 16:44:25.613628    4964 main.go:141] libmachine: Parsing certificate...
	I1209 16:44:25.614293    4964 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:44:25.853531    4964 main.go:141] libmachine: Creating SSH key...
	I1209 16:44:25.898423    4964 main.go:141] libmachine: Creating Disk image...
	I1209 16:44:25.898430    4964 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:44:25.898623    4964 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-flag-795000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-flag-795000/disk.qcow2
	I1209 16:44:25.918653    4964 main.go:141] libmachine: STDOUT: 
	I1209 16:44:25.918680    4964 main.go:141] libmachine: STDERR: 
	I1209 16:44:25.918739    4964 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-flag-795000/disk.qcow2 +20000M
	I1209 16:44:25.935723    4964 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:44:25.935751    4964 main.go:141] libmachine: STDERR: 
	I1209 16:44:25.935773    4964 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-flag-795000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-flag-795000/disk.qcow2
	I1209 16:44:25.935778    4964 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:44:25.935786    4964 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:44:25.935819    4964 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-flag-795000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-flag-795000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-flag-795000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:11:bb:c9:44:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-flag-795000/disk.qcow2
	I1209 16:44:25.937749    4964 main.go:141] libmachine: STDOUT: 
	I1209 16:44:25.937764    4964 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:44:25.937784    4964 client.go:171] duration metric: took 324.533166ms to LocalClient.Create
	I1209 16:44:27.939945    4964 start.go:128] duration metric: took 2.383568042s to createHost
	I1209 16:44:27.939998    4964 start.go:83] releasing machines lock for "force-systemd-flag-795000", held for 2.384013334s
	W1209 16:44:27.940057    4964 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:44:27.957781    4964 out.go:177] * Deleting "force-systemd-flag-795000" in qemu2 ...
	W1209 16:44:27.983224    4964 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:44:27.983248    4964 start.go:729] Will try again in 5 seconds ...
	I1209 16:44:32.985518    4964 start.go:360] acquireMachinesLock for force-systemd-flag-795000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:44:32.986050    4964 start.go:364] duration metric: took 434.083µs to acquireMachinesLock for "force-systemd-flag-795000"
	I1209 16:44:32.986215    4964 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-795000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-795000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:44:32.986538    4964 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:44:33.009368    4964 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1209 16:44:33.058103    4964 start.go:159] libmachine.API.Create for "force-systemd-flag-795000" (driver="qemu2")
	I1209 16:44:33.058167    4964 client.go:168] LocalClient.Create starting
	I1209 16:44:33.058313    4964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:44:33.058395    4964 main.go:141] libmachine: Decoding PEM data...
	I1209 16:44:33.058415    4964 main.go:141] libmachine: Parsing certificate...
	I1209 16:44:33.058476    4964 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:44:33.058532    4964 main.go:141] libmachine: Decoding PEM data...
	I1209 16:44:33.058544    4964 main.go:141] libmachine: Parsing certificate...
	I1209 16:44:33.059255    4964 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:44:33.230550    4964 main.go:141] libmachine: Creating SSH key...
	I1209 16:44:33.380347    4964 main.go:141] libmachine: Creating Disk image...
	I1209 16:44:33.380355    4964 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:44:33.380630    4964 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-flag-795000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-flag-795000/disk.qcow2
	I1209 16:44:33.391023    4964 main.go:141] libmachine: STDOUT: 
	I1209 16:44:33.391041    4964 main.go:141] libmachine: STDERR: 
	I1209 16:44:33.391097    4964 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-flag-795000/disk.qcow2 +20000M
	I1209 16:44:33.399599    4964 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:44:33.399623    4964 main.go:141] libmachine: STDERR: 
	I1209 16:44:33.399633    4964 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-flag-795000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-flag-795000/disk.qcow2
	I1209 16:44:33.399640    4964 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:44:33.399648    4964 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:44:33.399685    4964 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-flag-795000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-flag-795000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-flag-795000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:9a:b8:8a:6e:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-flag-795000/disk.qcow2
	I1209 16:44:33.401486    4964 main.go:141] libmachine: STDOUT: 
	I1209 16:44:33.401500    4964 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:44:33.401512    4964 client.go:171] duration metric: took 343.341583ms to LocalClient.Create
	I1209 16:44:35.403802    4964 start.go:128] duration metric: took 2.417219542s to createHost
	I1209 16:44:35.403876    4964 start.go:83] releasing machines lock for "force-systemd-flag-795000", held for 2.417812125s
	W1209 16:44:35.404248    4964 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-795000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-795000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:44:35.419887    4964 out.go:201] 
	W1209 16:44:35.423890    4964 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:44:35.423927    4964 out.go:270] * 
	* 
	W1209 16:44:35.426677    4964 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:44:35.434826    4964 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-795000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
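Both createHost attempts above fail at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach a listener on /var/run/socket_vmnet, so QEMU never starts and minikube exits with GUEST_PROVISION. That points at the host-side socket_vmnet daemon rather than at the test itself. A minimal host-side check, assuming the Homebrew-managed socket_vmnet setup these jobs rely on (hypothetical commands, not part of the recorded run):

$ ls -l /var/run/socket_vmnet            # the socket should exist while the daemon is up
$ sudo brew services start socket_vmnet  # (re)start the daemon if it is not running

With the daemon listening, the qemu-system-aarch64 invocation logged above would be expected to start instead of failing with exit status 1.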
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-795000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-795000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.874ms)

-- stdout --
	* The control-plane node force-systemd-flag-795000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-795000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-795000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-12-09 16:44:35.528145 -0800 PST m=+3707.721291668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-795000 -n force-systemd-flag-795000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-795000 -n force-systemd-flag-795000: exit status 7 (40.542667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-795000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-795000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-795000
--- FAIL: TestForceSystemdFlag (11.00s)

TestForceSystemdEnv (10.16s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-355000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I1209 16:44:19.448766    1742 install.go:79] stdout: 
W1209 16:44:19.449000    1742 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2287877910/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2287877910/001/docker-machine-driver-hyperkit 

I1209 16:44:19.449026    1742 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2287877910/001/docker-machine-driver-hyperkit]
I1209 16:44:19.466032    1742 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2287877910/001/docker-machine-driver-hyperkit]
I1209 16:44:19.478602    1742 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2287877910/001/docker-machine-driver-hyperkit]
I1209 16:44:19.489755    1742 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2287877910/001/docker-machine-driver-hyperkit]
I1209 16:44:19.511015    1742 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1209 16:44:19.511132    1742 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I1209 16:44:21.318326    1742 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1209 16:44:21.318351    1742 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1209 16:44:21.318408    1742 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1209 16:44:21.318450    1742 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2287877910/002/docker-machine-driver-hyperkit
I1209 16:44:21.719758    1742 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2287877910/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1050a16e0 0x1050a16e0 0x1050a16e0 0x1050a16e0 0x1050a16e0 0x1050a16e0 0x1050a16e0] Decompressors:map[bz2:0x14000610160 gz:0x14000610168 tar:0x14000610110 tar.bz2:0x14000610120 tar.gz:0x14000610130 tar.xz:0x14000610140 tar.zst:0x14000610150 tbz2:0x14000610120 tgz:0x14000610130 txz:0x14000610140 tzst:0x14000610150 xz:0x14000610170 zip:0x14000610180 zst:0x14000610178] Getters:map[file:0x14000908220 http:0x140007f84b0 https:0x140007f8500] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1209 16:44:21.719879    1742 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2287877910/002/docker-machine-driver-hyperkit
I1209 16:44:24.618294    1742 install.go:79] stdout: 
W1209 16:44:24.618463    1742 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2287877910/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2287877910/002/docker-machine-driver-hyperkit 

I1209 16:44:24.618495    1742 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2287877910/002/docker-machine-driver-hyperkit]
I1209 16:44:24.635837    1742 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2287877910/002/docker-machine-driver-hyperkit]
I1209 16:44:24.649486    1742 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2287877910/002/docker-machine-driver-hyperkit]
I1209 16:44:24.660241    1742 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2287877910/002/docker-machine-driver-hyperkit]
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-355000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.925430625s)

-- stdout --
	* [force-systemd-env-355000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-355000" primary control-plane node in "force-systemd-env-355000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-355000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I1209 16:44:15.700962    4920 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:44:15.701126    4920 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:44:15.701130    4920 out.go:358] Setting ErrFile to fd 2...
	I1209 16:44:15.701132    4920 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:44:15.701264    4920 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:44:15.702352    4920 out.go:352] Setting JSON to false
	I1209 16:44:15.720067    4920 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4425,"bootTime":1733787030,"procs":536,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:44:15.720143    4920 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:44:15.725443    4920 out.go:177] * [force-systemd-env-355000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:44:15.733584    4920 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:44:15.733639    4920 notify.go:220] Checking for updates...
	I1209 16:44:15.740489    4920 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:44:15.743514    4920 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:44:15.747430    4920 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:44:15.750497    4920 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:44:15.753468    4920 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1209 16:44:15.756772    4920 config.go:182] Loaded profile config "multinode-350000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:44:15.756818    4920 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:44:15.760485    4920 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:44:15.767525    4920 start.go:297] selected driver: qemu2
	I1209 16:44:15.767532    4920 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:44:15.767541    4920 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:44:15.770092    4920 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 16:44:15.773502    4920 out.go:177] * Automatically selected the socket_vmnet network
	I1209 16:44:15.776616    4920 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 16:44:15.776635    4920 cni.go:84] Creating CNI manager for ""
	I1209 16:44:15.776673    4920 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:44:15.776678    4920 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 16:44:15.776722    4920 start.go:340] cluster config:
	{Name:force-systemd-env-355000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-355000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:44:15.781457    4920 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:44:15.789500    4920 out.go:177] * Starting "force-systemd-env-355000" primary control-plane node in "force-systemd-env-355000" cluster
	I1209 16:44:15.793432    4920 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:44:15.793450    4920 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:44:15.793468    4920 cache.go:56] Caching tarball of preloaded images
	I1209 16:44:15.793545    4920 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:44:15.793551    4920 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:44:15.793609    4920 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/force-systemd-env-355000/config.json ...
	I1209 16:44:15.793621    4920 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/force-systemd-env-355000/config.json: {Name:mk77359dc687c498cd45c96fc6011a4d771cce1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:44:15.794120    4920 start.go:360] acquireMachinesLock for force-systemd-env-355000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:44:15.794183    4920 start.go:364] duration metric: took 54.625µs to acquireMachinesLock for "force-systemd-env-355000"
	I1209 16:44:15.794198    4920 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-355000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-355000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:44:15.794223    4920 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:44:15.803518    4920 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1209 16:44:15.821640    4920 start.go:159] libmachine.API.Create for "force-systemd-env-355000" (driver="qemu2")
	I1209 16:44:15.821666    4920 client.go:168] LocalClient.Create starting
	I1209 16:44:15.821756    4920 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:44:15.821797    4920 main.go:141] libmachine: Decoding PEM data...
	I1209 16:44:15.821809    4920 main.go:141] libmachine: Parsing certificate...
	I1209 16:44:15.821856    4920 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:44:15.821886    4920 main.go:141] libmachine: Decoding PEM data...
	I1209 16:44:15.821898    4920 main.go:141] libmachine: Parsing certificate...
	I1209 16:44:15.822370    4920 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:44:15.985076    4920 main.go:141] libmachine: Creating SSH key...
	I1209 16:44:16.170880    4920 main.go:141] libmachine: Creating Disk image...
	I1209 16:44:16.170886    4920 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:44:16.171139    4920 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-env-355000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-env-355000/disk.qcow2
	I1209 16:44:16.181493    4920 main.go:141] libmachine: STDOUT: 
	I1209 16:44:16.181513    4920 main.go:141] libmachine: STDERR: 
	I1209 16:44:16.181580    4920 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-env-355000/disk.qcow2 +20000M
	I1209 16:44:16.190190    4920 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:44:16.190203    4920 main.go:141] libmachine: STDERR: 
	I1209 16:44:16.190217    4920 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-env-355000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-env-355000/disk.qcow2
	I1209 16:44:16.190223    4920 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:44:16.190234    4920 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:44:16.190259    4920 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-env-355000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-env-355000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-env-355000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:81:d8:a1:f5:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-env-355000/disk.qcow2
	I1209 16:44:16.192114    4920 main.go:141] libmachine: STDOUT: 
	I1209 16:44:16.192129    4920 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:44:16.192149    4920 client.go:171] duration metric: took 370.478ms to LocalClient.Create
	I1209 16:44:18.194396    4920 start.go:128] duration metric: took 2.400146958s to createHost
	I1209 16:44:18.194476    4920 start.go:83] releasing machines lock for "force-systemd-env-355000", held for 2.400291208s
	W1209 16:44:18.194538    4920 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:44:18.210945    4920 out.go:177] * Deleting "force-systemd-env-355000" in qemu2 ...
	W1209 16:44:18.238859    4920 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:44:18.238887    4920 start.go:729] Will try again in 5 seconds ...
	I1209 16:44:23.241113    4920 start.go:360] acquireMachinesLock for force-systemd-env-355000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:44:23.241673    4920 start.go:364] duration metric: took 422.958µs to acquireMachinesLock for "force-systemd-env-355000"
	I1209 16:44:23.241806    4920 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-355000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-355000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:44:23.242058    4920 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:44:23.261930    4920 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1209 16:44:23.312322    4920 start.go:159] libmachine.API.Create for "force-systemd-env-355000" (driver="qemu2")
	I1209 16:44:23.312381    4920 client.go:168] LocalClient.Create starting
	I1209 16:44:23.312518    4920 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:44:23.312610    4920 main.go:141] libmachine: Decoding PEM data...
	I1209 16:44:23.312635    4920 main.go:141] libmachine: Parsing certificate...
	I1209 16:44:23.312697    4920 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:44:23.312755    4920 main.go:141] libmachine: Decoding PEM data...
	I1209 16:44:23.312767    4920 main.go:141] libmachine: Parsing certificate...
	I1209 16:44:23.313355    4920 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:44:23.484974    4920 main.go:141] libmachine: Creating SSH key...
	I1209 16:44:23.532800    4920 main.go:141] libmachine: Creating Disk image...
	I1209 16:44:23.532807    4920 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:44:23.533037    4920 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-env-355000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-env-355000/disk.qcow2
	I1209 16:44:23.542853    4920 main.go:141] libmachine: STDOUT: 
	I1209 16:44:23.542870    4920 main.go:141] libmachine: STDERR: 
	I1209 16:44:23.542936    4920 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-env-355000/disk.qcow2 +20000M
	I1209 16:44:23.551422    4920 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:44:23.551435    4920 main.go:141] libmachine: STDERR: 
	I1209 16:44:23.551449    4920 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-env-355000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-env-355000/disk.qcow2
	I1209 16:44:23.551452    4920 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:44:23.551461    4920 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:44:23.551497    4920 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-env-355000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-env-355000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-env-355000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:81:a2:2c:48:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/force-systemd-env-355000/disk.qcow2
	I1209 16:44:23.553422    4920 main.go:141] libmachine: STDOUT: 
	I1209 16:44:23.553436    4920 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:44:23.553458    4920 client.go:171] duration metric: took 241.074042ms to LocalClient.Create
	I1209 16:44:25.555663    4920 start.go:128] duration metric: took 2.313575s to createHost
	I1209 16:44:25.555743    4920 start.go:83] releasing machines lock for "force-systemd-env-355000", held for 2.314056583s
	W1209 16:44:25.556152    4920 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-355000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-355000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:44:25.569926    4920 out.go:201] 
	W1209 16:44:25.574920    4920 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:44:25.574952    4920 out.go:270] * 
	* 
	W1209 16:44:25.577061    4920 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:44:25.584849    4920 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-355000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-355000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-355000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.768833ms)

-- stdout --
	* The control-plane node force-systemd-env-355000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-355000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-355000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-12-09 16:44:25.673986 -0800 PST m=+3697.867095126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-355000 -n force-systemd-env-355000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-355000 -n force-systemd-env-355000: exit status 7 (41.531ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-355000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-355000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-355000
--- FAIL: TestForceSystemdEnv (10.16s)

TestFunctional/parallel/ServiceCmdConnect (40.72s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-121000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-121000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-t4vpt" [46dba77f-384e-4d88-a885-70f14f3b34e9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-t4vpt" [46dba77f-384e-4d88-a885-70f14f3b34e9] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.009042084s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:32081
functional_test.go:1661: error fetching http://192.168.105.4:32081: Get "http://192.168.105.4:32081": dial tcp 192.168.105.4:32081: connect: connection refused
I1209 15:53:54.636313    1742 retry.go:31] will retry after 1.0350709s: Get "http://192.168.105.4:32081": dial tcp 192.168.105.4:32081: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32081: Get "http://192.168.105.4:32081": dial tcp 192.168.105.4:32081: connect: connection refused
I1209 15:53:55.675210    1742 retry.go:31] will retry after 1.675874893s: Get "http://192.168.105.4:32081": dial tcp 192.168.105.4:32081: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32081: Get "http://192.168.105.4:32081": dial tcp 192.168.105.4:32081: connect: connection refused
I1209 15:53:57.355008    1742 retry.go:31] will retry after 2.287496135s: Get "http://192.168.105.4:32081": dial tcp 192.168.105.4:32081: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32081: Get "http://192.168.105.4:32081": dial tcp 192.168.105.4:32081: connect: connection refused
I1209 15:53:59.645461    1742 retry.go:31] will retry after 2.972387385s: Get "http://192.168.105.4:32081": dial tcp 192.168.105.4:32081: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32081: Get "http://192.168.105.4:32081": dial tcp 192.168.105.4:32081: connect: connection refused
I1209 15:54:02.621668    1742 retry.go:31] will retry after 5.264752748s: Get "http://192.168.105.4:32081": dial tcp 192.168.105.4:32081: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32081: Get "http://192.168.105.4:32081": dial tcp 192.168.105.4:32081: connect: connection refused
I1209 15:54:07.889543    1742 retry.go:31] will retry after 5.104167465s: Get "http://192.168.105.4:32081": dial tcp 192.168.105.4:32081: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32081: Get "http://192.168.105.4:32081": dial tcp 192.168.105.4:32081: connect: connection refused
I1209 15:54:12.994453    1742 retry.go:31] will retry after 8.838538719s: Get "http://192.168.105.4:32081": dial tcp 192.168.105.4:32081: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32081: Get "http://192.168.105.4:32081": dial tcp 192.168.105.4:32081: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:32081: Get "http://192.168.105.4:32081": dial tcp 192.168.105.4:32081: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-121000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-t4vpt
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-121000/192.168.105.4
Start Time:       Mon, 09 Dec 2024 15:53:42 -0800
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://594bb522c780bb44eaf8d643b5a0b5bb2c1c255ced1eb928d48ede5359f04a93
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 09 Dec 2024 15:54:06 -0800
      Finished:     Mon, 09 Dec 2024 15:54:06 -0800
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-892pv (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-892pv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  39s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-t4vpt to functional-121000
  Normal   Pulling    38s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     33s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 3.708s (5.198s including waiting). Image size: 84957542 bytes.
  Normal   Created    15s (x3 over 33s)  kubelet            Created container echoserver-arm
  Normal   Started    15s (x3 over 33s)  kubelet            Started container echoserver-arm
  Normal   Pulled     15s (x2 over 32s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    2s (x4 over 31s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-t4vpt_default(46dba77f-384e-4d88-a885-70f14f3b34e9)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-121000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
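This one-line log is the underlying crash: the container entrypoint execs /usr/sbin/nginx and the kernel rejects it with an exec format error, which on an arm64 node means the executable inside registry.k8s.io/echoserver-arm:1.8 was built for a different architecture. The pod therefore crash-loops, matching the connection-refused retries above. One way to confirm the mismatch from the host (hypothetical commands, not part of the recorded run):

$ docker pull registry.k8s.io/echoserver-arm:1.8
$ docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8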
functional_test.go:1614: (dbg) Run:  kubectl --context functional-121000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.233.184
IPs:                      10.96.233.184
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32081/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
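The empty Endpoints: field is consistent with the pod never reporting Ready, so the NodePort service has no backend to route to. Given that the pod logs show "exec /usr/sbin/nginx: exec format error" on a node that reports Architecture: arm64 (see the describe-nodes output below), the likely cause is an image whose binaries were not built for arm64. A sketch of how the published platform(s) of the tag could be checked, assuming registry access from the host:

    docker manifest inspect --verbose registry.k8s.io/echoserver-arm:1.8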
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-121000 -n functional-121000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-121000 ssh findmnt                                                                                        | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST | 09 Dec 24 15:54 PST |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-121000 ssh -- ls                                                                                          | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST | 09 Dec 24 15:54 PST |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-121000 ssh cat                                                                                            | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST | 09 Dec 24 15:54 PST |
	|           | /mount-9p/test-1733788446863898000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-121000 ssh stat                                                                                           | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST | 09 Dec 24 15:54 PST |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-121000 ssh stat                                                                                           | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST | 09 Dec 24 15:54 PST |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-121000 ssh sudo                                                                                           | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST | 09 Dec 24 15:54 PST |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-121000 ssh findmnt                                                                                        | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-121000                                                                                                 | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2908044062/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-121000 ssh findmnt                                                                                        | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST | 09 Dec 24 15:54 PST |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-121000 ssh -- ls                                                                                          | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST | 09 Dec 24 15:54 PST |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-121000 ssh sudo                                                                                           | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-121000                                                                                                 | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3252502787/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-121000                                                                                                 | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3252502787/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-121000                                                                                                 | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3252502787/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-121000 ssh findmnt                                                                                        | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-121000 ssh findmnt                                                                                        | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST | 09 Dec 24 15:54 PST |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-121000 ssh findmnt                                                                                        | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST |                     |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-121000 ssh findmnt                                                                                        | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST | 09 Dec 24 15:54 PST |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-121000 ssh findmnt                                                                                        | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST | 09 Dec 24 15:54 PST |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-121000 ssh findmnt                                                                                        | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST | 09 Dec 24 15:54 PST |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-121000                                                                                                 | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-121000                                                                                                 | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-121000                                                                                                 | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-121000 --dry-run                                                                                       | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-121000 | jenkins | v1.34.0 | 09 Dec 24 15:54 PST |                     |
	|           | -p functional-121000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 15:54:16
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 15:54:16.248521    2585 out.go:345] Setting OutFile to fd 1 ...
	I1209 15:54:16.248700    2585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 15:54:16.248703    2585 out.go:358] Setting ErrFile to fd 2...
	I1209 15:54:16.248706    2585 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 15:54:16.248834    2585 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 15:54:16.249885    2585 out.go:352] Setting JSON to false
	I1209 15:54:16.268295    2585 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1426,"bootTime":1733787030,"procs":530,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 15:54:16.268361    2585 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 15:54:16.272773    2585 out.go:177] * [functional-121000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 15:54:16.279778    2585 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 15:54:16.279815    2585 notify.go:220] Checking for updates...
	I1209 15:54:16.287692    2585 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 15:54:16.291745    2585 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 15:54:16.294682    2585 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 15:54:16.297714    2585 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 15:54:16.300767    2585 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 15:54:16.304001    2585 config.go:182] Loaded profile config "functional-121000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 15:54:16.304271    2585 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 15:54:16.307738    2585 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 15:54:16.314751    2585 start.go:297] selected driver: qemu2
	I1209 15:54:16.314758    2585 start.go:901] validating driver "qemu2" against &{Name:functional-121000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-121000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 15:54:16.314815    2585 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 15:54:16.317121    2585 cni.go:84] Creating CNI manager for ""
	I1209 15:54:16.317145    2585 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 15:54:16.317179    2585 start.go:340] cluster config:
	{Name:functional-121000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-121000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 15:54:16.329503    2585 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 09 23:54:09 functional-121000 dockerd[5941]: time="2024-12-09T23:54:09.564688375Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 09 23:54:11 functional-121000 dockerd[5935]: time="2024-12-09T23:54:11.655767159Z" level=info msg="ignoring event" container=81036b5720c15ae90ee6f150ca7cc83fea0a109bee5e7f090b8666abe1232852 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 09 23:54:11 functional-121000 dockerd[5941]: time="2024-12-09T23:54:11.656011481Z" level=info msg="shim disconnected" id=81036b5720c15ae90ee6f150ca7cc83fea0a109bee5e7f090b8666abe1232852 namespace=moby
	Dec 09 23:54:11 functional-121000 dockerd[5941]: time="2024-12-09T23:54:11.656041188Z" level=warning msg="cleaning up after shim disconnected" id=81036b5720c15ae90ee6f150ca7cc83fea0a109bee5e7f090b8666abe1232852 namespace=moby
	Dec 09 23:54:11 functional-121000 dockerd[5941]: time="2024-12-09T23:54:11.656047479Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 09 23:54:12 functional-121000 dockerd[5941]: time="2024-12-09T23:54:12.447155658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 09 23:54:12 functional-121000 dockerd[5941]: time="2024-12-09T23:54:12.447199906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 09 23:54:12 functional-121000 dockerd[5941]: time="2024-12-09T23:54:12.447312984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 09 23:54:12 functional-121000 dockerd[5941]: time="2024-12-09T23:54:12.447405563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 09 23:54:12 functional-121000 dockerd[5935]: time="2024-12-09T23:54:12.470074227Z" level=info msg="ignoring event" container=5883d46bdd37d84b34fb4ef6562be8f0552acc6007859a2c2bb17b4bde192eff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 09 23:54:12 functional-121000 dockerd[5941]: time="2024-12-09T23:54:12.470200929Z" level=info msg="shim disconnected" id=5883d46bdd37d84b34fb4ef6562be8f0552acc6007859a2c2bb17b4bde192eff namespace=moby
	Dec 09 23:54:12 functional-121000 dockerd[5941]: time="2024-12-09T23:54:12.470232428Z" level=warning msg="cleaning up after shim disconnected" id=5883d46bdd37d84b34fb4ef6562be8f0552acc6007859a2c2bb17b4bde192eff namespace=moby
	Dec 09 23:54:12 functional-121000 dockerd[5941]: time="2024-12-09T23:54:12.470236844Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 09 23:54:17 functional-121000 dockerd[5941]: time="2024-12-09T23:54:17.263082894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 09 23:54:17 functional-121000 dockerd[5941]: time="2024-12-09T23:54:17.263264927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 09 23:54:17 functional-121000 dockerd[5941]: time="2024-12-09T23:54:17.263276427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 09 23:54:17 functional-121000 dockerd[5941]: time="2024-12-09T23:54:17.263334341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 09 23:54:17 functional-121000 dockerd[5941]: time="2024-12-09T23:54:17.300810863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 09 23:54:17 functional-121000 dockerd[5941]: time="2024-12-09T23:54:17.300932524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 09 23:54:17 functional-121000 dockerd[5941]: time="2024-12-09T23:54:17.300960189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 09 23:54:17 functional-121000 dockerd[5941]: time="2024-12-09T23:54:17.301018187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 09 23:54:17 functional-121000 cri-dockerd[6275]: time="2024-12-09T23:54:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8a9104f7f9f5ed3a8cec0cb292f2d8907d45813240e6d33a2d91e1dff3600d28/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 09 23:54:17 functional-121000 cri-dockerd[6275]: time="2024-12-09T23:54:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d041939afb59303b61e98706b1a8ebf06eb5120f15cd6d7b530958d84345b897/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 09 23:54:17 functional-121000 dockerd[5935]: time="2024-12-09T23:54:17.562759162Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" spanID=666a23ffc22e987e traceID=d2e937c08f634a880b1493ca2e608502
	Dec 09 23:54:22 functional-121000 cri-dockerd[6275]: time="2024-12-09T23:54:22Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5883d46bdd37d       72565bf5bbedf                                                                                         10 seconds ago       Exited              echoserver-arm            2                   0a24124bf362e       hello-node-64b4f8f9ff-g9mlh
	8d1cc24e36ed4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 seconds ago       Exited              mount-munger              0                   81036b5720c15       busybox-mount
	594bb522c780b       72565bf5bbedf                                                                                         16 seconds ago       Exited              echoserver-arm            2                   407e94b2e49ae       hello-node-connect-65d86f57f4-t4vpt
	e43183c3f93de       nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be                         29 seconds ago       Running             myfrontend                0                   a10e6d7e45855       sp-pod
	b3d262c8ba4ef       nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                         47 seconds ago       Running             nginx                     0                   d767a527a3580       nginx-svc
	46618a49b2288       2f6c962e7b831                                                                                         About a minute ago   Running             coredns                   2                   42bc1d067d2e8       coredns-7c65d6cfc9-wkgzl
	263a28c33c3d4       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   fd95aefdadf27       storage-provisioner
	48f26d75f95ee       021d242013305                                                                                         About a minute ago   Running             kube-proxy                2                   5d56933eb6296       kube-proxy-tqcqb
	5af0ee783e0d3       9404aea098d9e                                                                                         About a minute ago   Running             kube-controller-manager   2                   9acf86b1f967a       kube-controller-manager-functional-121000
	9c477eccf5210       d6b061e73ae45                                                                                         About a minute ago   Running             kube-scheduler            2                   560fa9b4ab79c       kube-scheduler-functional-121000
	89eac2353416d       27e3830e14027                                                                                         About a minute ago   Running             etcd                      2                   fcbb09e75b732       etcd-functional-121000
	7695338ed8808       f9c26480f1e72                                                                                         About a minute ago   Running             kube-apiserver            0                   f66dc21cf9ec4       kube-apiserver-functional-121000
	a5c1761eff438       2f6c962e7b831                                                                                         2 minutes ago        Exited              coredns                   1                   7eed401ac415e       coredns-7c65d6cfc9-wkgzl
	b3739c6d41ee5       021d242013305                                                                                         2 minutes ago        Exited              kube-proxy                1                   314138e776019       kube-proxy-tqcqb
	f18dab08a9ed1       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       1                   0643fd6f46455       storage-provisioner
	4e0be727ec9a9       27e3830e14027                                                                                         2 minutes ago        Exited              etcd                      1                   148abf8b4b7f0       etcd-functional-121000
	b223170bb9b53       9404aea098d9e                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   f21f7156c285c       kube-controller-manager-functional-121000
	ce291bd1fd956       d6b061e73ae45                                                                                         2 minutes ago        Exited              kube-scheduler            1                   1ce59252b88cd       kube-scheduler-functional-121000
	
	
	==> coredns [46618a49b228] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:45040 - 8860 "HINFO IN 7146752893915971358.6437057319272008277. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010355247s
	[INFO] 10.244.0.1:33177 - 53936 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000091496s
	[INFO] 10.244.0.1:16920 - 9436 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000073121s
	[INFO] 10.244.0.1:42832 - 36380 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000036749s
	[INFO] 10.244.0.1:37508 - 28019 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001318185s
	[INFO] 10.244.0.1:10532 - 25096 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000052956s
	[INFO] 10.244.0.1:40792 - 35116 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.00009312s
	
	
	==> coredns [a5c1761eff43] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:57158 - 17778 "HINFO IN 8902638586186848377.7787956701262845806. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004906583s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-121000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-121000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=functional-121000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T15_51_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Dec 2024 23:51:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-121000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Dec 2024 23:54:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Dec 2024 23:54:05 +0000   Mon, 09 Dec 2024 23:51:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Dec 2024 23:54:05 +0000   Mon, 09 Dec 2024 23:51:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Dec 2024 23:54:05 +0000   Mon, 09 Dec 2024 23:51:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Dec 2024 23:54:05 +0000   Mon, 09 Dec 2024 23:51:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-121000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 9a8a72a0986e4c7bb676859a0a95a246
	  System UUID:                9a8a72a0986e4c7bb676859a0a95a246
	  Boot ID:                    ff4a56f0-1784-4e7e-a13f-c1919ee54c8d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-g9mlh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  default                     hello-node-connect-65d86f57f4-t4vpt          0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 coredns-7c65d6cfc9-wkgzl                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m35s
	  kube-system                 etcd-functional-121000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m40s
	  kube-system                 kube-apiserver-functional-121000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-functional-121000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-proxy-tqcqb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-scheduler-functional-121000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-cqlq7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-s722d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m34s                  kube-proxy       
	  Normal  Starting                 78s                    kube-proxy       
	  Normal  Starting                 2m3s                   kube-proxy       
	  Normal  Starting                 2m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m44s (x8 over 2m44s)  kubelet          Node functional-121000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m44s (x8 over 2m44s)  kubelet          Node functional-121000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m44s (x7 over 2m44s)  kubelet          Node functional-121000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m40s                  kubelet          Node functional-121000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m40s                  kubelet          Node functional-121000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m40s                  kubelet          Node functional-121000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m40s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m36s                  node-controller  Node functional-121000 event: Registered Node functional-121000 in Controller
	  Normal  NodeReady                2m36s                  kubelet          Node functional-121000 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m8s)    kubelet          Node functional-121000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m8s (x8 over 2m8s)    kubelet          Node functional-121000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m8s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m8s (x7 over 2m8s)    kubelet          Node functional-121000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m1s                   node-controller  Node functional-121000 event: Registered Node functional-121000 in Controller
	  Normal  Starting                 82s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  82s (x8 over 82s)      kubelet          Node functional-121000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s (x8 over 82s)      kubelet          Node functional-121000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x7 over 82s)      kubelet          Node functional-121000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           76s                    node-controller  Node functional-121000 event: Registered Node functional-121000 in Controller
	
	
	==> dmesg <==
	[ +11.858575] systemd-fstab-generator[5459]: Ignoring "noauto" option for root device
	[  +0.053892] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.108581] systemd-fstab-generator[5492]: Ignoring "noauto" option for root device
	[  +0.100984] systemd-fstab-generator[5504]: Ignoring "noauto" option for root device
	[  +0.116896] systemd-fstab-generator[5518]: Ignoring "noauto" option for root device
	[  +5.116863] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.315408] systemd-fstab-generator[6157]: Ignoring "noauto" option for root device
	[  +0.085163] systemd-fstab-generator[6169]: Ignoring "noauto" option for root device
	[  +0.086209] systemd-fstab-generator[6181]: Ignoring "noauto" option for root device
	[  +0.102843] systemd-fstab-generator[6267]: Ignoring "noauto" option for root device
	[  +0.224042] systemd-fstab-generator[6434]: Ignoring "noauto" option for root device
	[  +0.952724] systemd-fstab-generator[6558]: Ignoring "noauto" option for root device
	[Dec 9 23:53] kauditd_printk_skb: 199 callbacks suppressed
	[  +9.590161] kauditd_printk_skb: 35 callbacks suppressed
	[  +6.495903] systemd-fstab-generator[7606]: Ignoring "noauto" option for root device
	[  +5.061879] kauditd_printk_skb: 27 callbacks suppressed
	[  +6.255246] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.052563] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.406300] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.788580] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.025850] kauditd_printk_skb: 24 callbacks suppressed
	[Dec 9 23:54] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.140562] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.390431] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.550400] kauditd_printk_skb: 20 callbacks suppressed
	
	
	==> etcd [4e0be727ec9a] <==
	{"level":"info","ts":"2024-12-09T23:52:17.386485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-09T23:52:17.386560Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-12-09T23:52:17.386958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-12-09T23:52:17.387046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-12-09T23:52:17.387102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-12-09T23:52:17.387147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-12-09T23:52:17.391797Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T23:52:17.392541Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T23:52:17.391802Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-121000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-09T23:52:17.393401Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-09T23:52:17.393506Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-09T23:52:17.394941Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T23:52:17.395827Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T23:52:17.397687Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-12-09T23:52:17.398398Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-09T23:52:46.483461Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-12-09T23:52:46.483498Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-121000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-12-09T23:52:46.483544Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-09T23:52:46.483591Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-09T23:52:46.505638Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-09T23:52:46.505673Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-09T23:52:46.506885Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-12-09T23:52:46.508214Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-12-09T23:52:46.508249Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-12-09T23:52:46.508254Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-121000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [89eac2353416] <==
	{"level":"info","ts":"2024-12-09T23:53:01.097808Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-12-09T23:53:01.097859Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T23:53:01.097900Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-09T23:53:01.099148Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T23:53:01.099873Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-09T23:53:01.099938Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-12-09T23:53:01.100510Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-12-09T23:53:01.102187Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-09T23:53:01.103899Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-09T23:53:02.794126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-12-09T23:53:02.794264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-12-09T23:53:02.794342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-12-09T23:53:02.794382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-12-09T23:53:02.794419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-12-09T23:53:02.794448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-12-09T23:53:02.794468Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-12-09T23:53:02.798983Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-121000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-09T23:53:02.799057Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T23:53:02.799420Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-09T23:53:02.799459Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-09T23:53:02.799498Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-09T23:53:02.801303Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T23:53:02.801692Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-09T23:53:02.803729Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-12-09T23:53:02.804782Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 23:54:22 up 3 min,  0 users,  load average: 0.87, 0.58, 0.24
	Linux functional-121000 5.10.207 #1 SMP PREEMPT Wed Nov 6 19:14:02 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7695338ed880] <==
	I1209 23:53:03.398903       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1209 23:53:03.398905       1 cache.go:39] Caches are synced for autoregister controller
	I1209 23:53:03.428440       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1209 23:53:03.428471       1 policy_source.go:224] refreshing policies
	I1209 23:53:03.428487       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1209 23:53:03.463097       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1209 23:53:04.298691       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1209 23:53:04.404415       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.105.4]
	I1209 23:53:04.405303       1 controller.go:615] quota admission added evaluator for: endpoints
	I1209 23:53:04.411363       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1209 23:53:04.460764       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1209 23:53:04.464626       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1209 23:53:04.475275       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1209 23:53:04.482595       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1209 23:53:04.484817       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1209 23:53:24.404981       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.118.255"}
	I1209 23:53:31.093549       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.233.140"}
	I1209 23:53:42.489488       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1209 23:53:42.552595       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.233.184"}
	E1209 23:53:51.009903       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49705: use of closed network connection
	E1209 23:53:59.388466       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49715: use of closed network connection
	I1209 23:53:59.469091       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.1.123"}
	I1209 23:54:16.859380       1 controller.go:615] quota admission added evaluator for: namespaces
	I1209 23:54:16.970566       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.111.197"}
	I1209 23:54:16.980088       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.208.60"}
	
	
	==> kube-controller-manager [5af0ee783e0d] <==
	I1209 23:54:05.124645       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-121000"
	I1209 23:54:07.474332       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="23.207µs"
	I1209 23:54:12.417388       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="49.123µs"
	I1209 23:54:12.588576       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="27.749µs"
	I1209 23:54:16.893839       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="10.632552ms"
	E1209 23:54:16.893908       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1209 23:54:16.897704       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="8.048046ms"
	E1209 23:54:16.897749       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1209 23:54:16.898085       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="2.57409ms"
	E1209 23:54:16.898104       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1209 23:54:16.903587       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="3.410342ms"
	E1209 23:54:16.903604       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1209 23:54:16.903858       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.282682ms"
	E1209 23:54:16.903882       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1209 23:54:16.914577       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="9.332861ms"
	I1209 23:54:16.917318       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="2.64042ms"
	I1209 23:54:16.917864       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="21.707µs"
	I1209 23:54:16.924684       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="23.999µs"
	I1209 23:54:16.957396       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="32.460545ms"
	I1209 23:54:16.963247       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="5.82519ms"
	I1209 23:54:16.963292       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="24.833µs"
	I1209 23:54:16.967738       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="10.667µs"
	I1209 23:54:19.396436       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="26.165µs"
	I1209 23:54:22.705667       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.661008ms"
	I1209 23:54:22.705802       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="15.624µs"
	
	
	==> kube-controller-manager [b223170bb9b5] <==
	I1209 23:52:21.249661       1 shared_informer.go:320] Caches are synced for PV protection
	I1209 23:52:21.250740       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1209 23:52:21.251875       1 shared_informer.go:320] Caches are synced for endpoint
	I1209 23:52:21.278959       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1209 23:52:21.279000       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1209 23:52:21.279034       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1209 23:52:21.279090       1 shared_informer.go:320] Caches are synced for TTL
	I1209 23:52:21.279124       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1209 23:52:21.280030       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1209 23:52:21.280111       1 shared_informer.go:320] Caches are synced for crt configmap
	I1209 23:52:21.281322       1 shared_informer.go:320] Caches are synced for expand
	I1209 23:52:21.284784       1 shared_informer.go:320] Caches are synced for node
	I1209 23:52:21.284853       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1209 23:52:21.284867       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1209 23:52:21.284870       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1209 23:52:21.284934       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1209 23:52:21.285003       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-121000"
	I1209 23:52:21.346483       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1209 23:52:21.428587       1 shared_informer.go:320] Caches are synced for disruption
	I1209 23:52:21.453888       1 shared_informer.go:320] Caches are synced for deployment
	I1209 23:52:21.487328       1 shared_informer.go:320] Caches are synced for resource quota
	I1209 23:52:21.535784       1 shared_informer.go:320] Caches are synced for resource quota
	I1209 23:52:21.909703       1 shared_informer.go:320] Caches are synced for garbage collector
	I1209 23:52:21.953457       1 shared_informer.go:320] Caches are synced for garbage collector
	I1209 23:52:21.953527       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [48f26d75f95e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 23:53:03.955318       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 23:53:03.958729       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1209 23:53:03.958756       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 23:53:03.966261       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 23:53:03.966277       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 23:53:03.966287       1 server_linux.go:169] "Using iptables Proxier"
	I1209 23:53:03.966844       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 23:53:03.966927       1 server.go:483] "Version info" version="v1.31.2"
	I1209 23:53:03.966936       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:53:03.967506       1 config.go:199] "Starting service config controller"
	I1209 23:53:03.967516       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 23:53:03.967527       1 config.go:105] "Starting endpoint slice config controller"
	I1209 23:53:03.967529       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 23:53:03.967996       1 config.go:328] "Starting node config controller"
	I1209 23:53:03.967999       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 23:53:04.067605       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 23:53:04.067662       1 shared_informer.go:320] Caches are synced for service config
	I1209 23:53:04.068097       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [b3739c6d41ee] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1209 23:52:19.225518       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1209 23:52:19.230394       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1209 23:52:19.230493       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1209 23:52:19.238139       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1209 23:52:19.238156       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1209 23:52:19.238167       1 server_linux.go:169] "Using iptables Proxier"
	I1209 23:52:19.238814       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1209 23:52:19.238919       1 server.go:483] "Version info" version="v1.31.2"
	I1209 23:52:19.238923       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:52:19.239601       1 config.go:199] "Starting service config controller"
	I1209 23:52:19.239641       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1209 23:52:19.239667       1 config.go:105] "Starting endpoint slice config controller"
	I1209 23:52:19.239696       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1209 23:52:19.239890       1 config.go:328] "Starting node config controller"
	I1209 23:52:19.239911       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1209 23:52:19.339893       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1209 23:52:19.339955       1 shared_informer.go:320] Caches are synced for node config
	I1209 23:52:19.339893       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [9c477eccf521] <==
	I1209 23:53:01.308825       1 serving.go:386] Generated self-signed cert in-memory
	W1209 23:53:03.314408       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 23:53:03.314420       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 23:53:03.314424       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 23:53:03.314427       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 23:53:03.345763       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1209 23:53:03.345779       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:53:03.346655       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1209 23:53:03.346708       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 23:53:03.346720       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 23:53:03.346727       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 23:53:03.450713       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ce291bd1fd95] <==
	I1209 23:52:16.236699       1 serving.go:386] Generated self-signed cert in-memory
	W1209 23:52:17.892214       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1209 23:52:17.892231       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1209 23:52:17.892235       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1209 23:52:17.892239       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1209 23:52:17.944673       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1209 23:52:17.946346       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1209 23:52:17.947354       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1209 23:52:17.947429       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1209 23:52:17.947499       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1209 23:52:17.947616       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1209 23:52:18.047702       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1209 23:52:46.469706       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 09 23:54:07 functional-121000 kubelet[6565]: I1209 23:54:07.467796    6565 scope.go:117] "RemoveContainer" containerID="fc0e0e8da45ef589abf131814ea91a0c1d046422e3768077707295edb87ae770"
	Dec 09 23:54:07 functional-121000 kubelet[6565]: I1209 23:54:07.467960    6565 scope.go:117] "RemoveContainer" containerID="594bb522c780bb44eaf8d643b5a0b5bb2c1c255ced1eb928d48ede5359f04a93"
	Dec 09 23:54:07 functional-121000 kubelet[6565]: E1209 23:54:07.468048    6565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-t4vpt_default(46dba77f-384e-4d88-a885-70f14f3b34e9)\"" pod="default/hello-node-connect-65d86f57f4-t4vpt" podUID="46dba77f-384e-4d88-a885-70f14f3b34e9"
	Dec 09 23:54:07 functional-121000 kubelet[6565]: I1209 23:54:07.754233    6565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl8tm\" (UniqueName: \"kubernetes.io/projected/d57622ac-d71d-4a27-a0ba-aabf0f342fab-kube-api-access-dl8tm\") pod \"busybox-mount\" (UID: \"d57622ac-d71d-4a27-a0ba-aabf0f342fab\") " pod="default/busybox-mount"
	Dec 09 23:54:07 functional-121000 kubelet[6565]: I1209 23:54:07.754305    6565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/d57622ac-d71d-4a27-a0ba-aabf0f342fab-test-volume\") pod \"busybox-mount\" (UID: \"d57622ac-d71d-4a27-a0ba-aabf0f342fab\") " pod="default/busybox-mount"
	Dec 09 23:54:11 functional-121000 kubelet[6565]: I1209 23:54:11.809025    6565 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/d57622ac-d71d-4a27-a0ba-aabf0f342fab-test-volume\") pod \"d57622ac-d71d-4a27-a0ba-aabf0f342fab\" (UID: \"d57622ac-d71d-4a27-a0ba-aabf0f342fab\") "
	Dec 09 23:54:11 functional-121000 kubelet[6565]: I1209 23:54:11.809084    6565 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dl8tm\" (UniqueName: \"kubernetes.io/projected/d57622ac-d71d-4a27-a0ba-aabf0f342fab-kube-api-access-dl8tm\") pod \"d57622ac-d71d-4a27-a0ba-aabf0f342fab\" (UID: \"d57622ac-d71d-4a27-a0ba-aabf0f342fab\") "
	Dec 09 23:54:11 functional-121000 kubelet[6565]: I1209 23:54:11.809302    6565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d57622ac-d71d-4a27-a0ba-aabf0f342fab-test-volume" (OuterVolumeSpecName: "test-volume") pod "d57622ac-d71d-4a27-a0ba-aabf0f342fab" (UID: "d57622ac-d71d-4a27-a0ba-aabf0f342fab"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Dec 09 23:54:11 functional-121000 kubelet[6565]: I1209 23:54:11.809913    6565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d57622ac-d71d-4a27-a0ba-aabf0f342fab-kube-api-access-dl8tm" (OuterVolumeSpecName: "kube-api-access-dl8tm") pod "d57622ac-d71d-4a27-a0ba-aabf0f342fab" (UID: "d57622ac-d71d-4a27-a0ba-aabf0f342fab"). InnerVolumeSpecName "kube-api-access-dl8tm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 09 23:54:11 functional-121000 kubelet[6565]: I1209 23:54:11.912553    6565 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dl8tm\" (UniqueName: \"kubernetes.io/projected/d57622ac-d71d-4a27-a0ba-aabf0f342fab-kube-api-access-dl8tm\") on node \"functional-121000\" DevicePath \"\""
	Dec 09 23:54:11 functional-121000 kubelet[6565]: I1209 23:54:11.912607    6565 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/d57622ac-d71d-4a27-a0ba-aabf0f342fab-test-volume\") on node \"functional-121000\" DevicePath \"\""
	Dec 09 23:54:12 functional-121000 kubelet[6565]: I1209 23:54:12.391798    6565 scope.go:117] "RemoveContainer" containerID="606c95de3607e6942ac2e80dc6e8fbba9d7821a820f7b835e822830754dbb509"
	Dec 09 23:54:12 functional-121000 kubelet[6565]: I1209 23:54:12.579071    6565 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81036b5720c15ae90ee6f150ca7cc83fea0a109bee5e7f090b8666abe1232852"
	Dec 09 23:54:12 functional-121000 kubelet[6565]: I1209 23:54:12.584224    6565 scope.go:117] "RemoveContainer" containerID="606c95de3607e6942ac2e80dc6e8fbba9d7821a820f7b835e822830754dbb509"
	Dec 09 23:54:12 functional-121000 kubelet[6565]: I1209 23:54:12.584342    6565 scope.go:117] "RemoveContainer" containerID="5883d46bdd37d84b34fb4ef6562be8f0552acc6007859a2c2bb17b4bde192eff"
	Dec 09 23:54:12 functional-121000 kubelet[6565]: E1209 23:54:12.584437    6565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-g9mlh_default(53195f68-66ca-4017-90d4-c23e6e169cfa)\"" pod="default/hello-node-64b4f8f9ff-g9mlh" podUID="53195f68-66ca-4017-90d4-c23e6e169cfa"
	Dec 09 23:54:16 functional-121000 kubelet[6565]: E1209 23:54:16.912268    6565 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d57622ac-d71d-4a27-a0ba-aabf0f342fab" containerName="mount-munger"
	Dec 09 23:54:16 functional-121000 kubelet[6565]: I1209 23:54:16.912301    6565 memory_manager.go:354] "RemoveStaleState removing state" podUID="d57622ac-d71d-4a27-a0ba-aabf0f342fab" containerName="mount-munger"
	Dec 09 23:54:17 functional-121000 kubelet[6565]: I1209 23:54:17.066130    6565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b05ea8ce-194b-46da-ba85-7787a1b3a20f-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-cqlq7\" (UID: \"b05ea8ce-194b-46da-ba85-7787a1b3a20f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-cqlq7"
	Dec 09 23:54:17 functional-121000 kubelet[6565]: I1209 23:54:17.066158    6565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75rzp\" (UniqueName: \"kubernetes.io/projected/954ec464-150a-48ce-aa06-e6aa3b899e96-kube-api-access-75rzp\") pod \"kubernetes-dashboard-695b96c756-s722d\" (UID: \"954ec464-150a-48ce-aa06-e6aa3b899e96\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-s722d"
	Dec 09 23:54:17 functional-121000 kubelet[6565]: I1209 23:54:17.066171    6565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/954ec464-150a-48ce-aa06-e6aa3b899e96-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-s722d\" (UID: \"954ec464-150a-48ce-aa06-e6aa3b899e96\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-s722d"
	Dec 09 23:54:17 functional-121000 kubelet[6565]: I1209 23:54:17.066180    6565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crgtq\" (UniqueName: \"kubernetes.io/projected/b05ea8ce-194b-46da-ba85-7787a1b3a20f-kube-api-access-crgtq\") pod \"dashboard-metrics-scraper-c5db448b4-cqlq7\" (UID: \"b05ea8ce-194b-46da-ba85-7787a1b3a20f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-cqlq7"
	Dec 09 23:54:19 functional-121000 kubelet[6565]: I1209 23:54:19.387917    6565 scope.go:117] "RemoveContainer" containerID="594bb522c780bb44eaf8d643b5a0b5bb2c1c255ced1eb928d48ede5359f04a93"
	Dec 09 23:54:19 functional-121000 kubelet[6565]: E1209 23:54:19.388016    6565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-t4vpt_default(46dba77f-384e-4d88-a885-70f14f3b34e9)\"" pod="default/hello-node-connect-65d86f57f4-t4vpt" podUID="46dba77f-384e-4d88-a885-70f14f3b34e9"
	Dec 09 23:54:22 functional-121000 kubelet[6565]: I1209 23:54:22.693035    6565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-s722d" podStartSLOduration=1.778576728 podStartE2EDuration="6.693025137s" podCreationTimestamp="2024-12-09 23:54:16 +0000 UTC" firstStartedPulling="2024-12-09 23:54:17.346775369 +0000 UTC m=+77.021975178" lastFinishedPulling="2024-12-09 23:54:22.261223778 +0000 UTC m=+81.936423587" observedRunningTime="2024-12-09 23:54:22.69288256 +0000 UTC m=+82.368082327" watchObservedRunningTime="2024-12-09 23:54:22.693025137 +0000 UTC m=+82.368224946"
	
	
	==> storage-provisioner [263a28c33c3d] <==
	I1209 23:53:03.893513       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 23:53:03.899636       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 23:53:03.899659       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 23:53:21.312783       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 23:53:21.313291       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-121000_3ca77970-af0e-4754-9912-aa333fcc0bfa!
	I1209 23:53:21.313503       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"160f4d0f-6675-4946-8f88-a19b94757791", APIVersion:"v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-121000_3ca77970-af0e-4754-9912-aa333fcc0bfa became leader
	I1209 23:53:21.422545       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-121000_3ca77970-af0e-4754-9912-aa333fcc0bfa!
	I1209 23:53:38.811103       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1209 23:53:38.811137       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    bd12ca4e-694e-4c7a-ba42-702d2920df95 388 0 2024-12-09 23:51:48 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-12-09 23:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-b9b9ff7e-cffe-400e-aa1a-f756e42a6539 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  b9b9ff7e-cffe-400e-aa1a-f756e42a6539 708 0 2024-12-09 23:53:38 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-12-09 23:53:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-12-09 23:53:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1209 23:53:38.811635       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-b9b9ff7e-cffe-400e-aa1a-f756e42a6539" provisioned
	I1209 23:53:38.811647       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1209 23:53:38.811650       1 volume_store.go:212] Trying to save persistentvolume "pvc-b9b9ff7e-cffe-400e-aa1a-f756e42a6539"
	I1209 23:53:38.811977       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"b9b9ff7e-cffe-400e-aa1a-f756e42a6539", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1209 23:53:38.816460       1 volume_store.go:219] persistentvolume "pvc-b9b9ff7e-cffe-400e-aa1a-f756e42a6539" saved
	I1209 23:53:38.816706       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"b9b9ff7e-cffe-400e-aa1a-f756e42a6539", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-b9b9ff7e-cffe-400e-aa1a-f756e42a6539
	
	
	==> storage-provisioner [f18dab08a9ed] <==
	I1209 23:52:19.167862       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1209 23:52:19.177325       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1209 23:52:19.177341       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1209 23:52:36.602482       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1209 23:52:36.603260       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-121000_8ed52cc1-604d-4f17-9396-1b1f7e6a23aa!
	I1209 23:52:36.602885       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"160f4d0f-6675-4946-8f88-a19b94757791", APIVersion:"v1", ResourceVersion:"534", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-121000_8ed52cc1-604d-4f17-9396-1b1f7e6a23aa became leader
	I1209 23:52:36.705888       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-121000_8ed52cc1-604d-4f17-9396-1b1f7e6a23aa!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-121000 -n functional-121000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-121000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-c5db448b4-cqlq7
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-121000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-cqlq7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-121000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-cqlq7: exit status 1 (39.450875ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-121000/192.168.105.4
	Start Time:       Mon, 09 Dec 2024 15:54:07 -0800
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://8d1cc24e36ed49649f9e6ea35b4e611987f3c55565f0fd1614783c2814161809
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 09 Dec 2024 15:54:09 -0800
	      Finished:     Mon, 09 Dec 2024 15:54:09 -0800
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dl8tm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-dl8tm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  16s   default-scheduler  Successfully assigned default/busybox-mount to functional-121000
	  Normal  Pulling    15s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     14s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.425s (1.425s including waiting). Image size: 3547125 bytes.
	  Normal  Created    14s   kubelet            Created container mount-munger
	  Normal  Started    14s   kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-c5db448b4-cqlq7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-121000 describe pod busybox-mount dashboard-metrics-scraper-c5db448b4-cqlq7: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (40.72s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (725.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-677000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E1209 15:56:39.876434    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:57:07.603440    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:58:30.930020    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:58:30.937672    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:58:30.951092    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:58:30.974507    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:58:31.017915    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:58:31.101344    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:58:31.264770    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:58:31.588340    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:58:32.232100    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:58:33.515854    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:58:36.079571    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:58:41.203354    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:58:51.447078    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:59:11.930793    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:59:52.894407    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 16:01:14.848434    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 16:01:39.914843    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 16:03:30.971674    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 16:03:58.703894    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-677000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 52 (12m5.303982s)

                                                
                                                
-- stdout --
	* [ha-677000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-677000" primary control-plane node in "ha-677000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Deleting "ha-677000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 15:54:26.287428    2642 out.go:345] Setting OutFile to fd 1 ...
	I1209 15:54:26.287577    2642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 15:54:26.287580    2642 out.go:358] Setting ErrFile to fd 2...
	I1209 15:54:26.287583    2642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 15:54:26.287697    2642 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 15:54:26.288831    2642 out.go:352] Setting JSON to false
	I1209 15:54:26.307982    2642 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1436,"bootTime":1733787030,"procs":531,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 15:54:26.308067    2642 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 15:54:26.309398    2642 out.go:177] * [ha-677000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 15:54:26.316622    2642 notify.go:220] Checking for updates...
	I1209 15:54:26.320454    2642 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 15:54:26.323566    2642 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 15:54:26.326608    2642 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 15:54:26.330544    2642 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 15:54:26.333563    2642 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 15:54:26.336606    2642 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 15:54:26.339754    2642 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 15:54:26.342545    2642 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 15:54:26.349470    2642 start.go:297] selected driver: qemu2
	I1209 15:54:26.349477    2642 start.go:901] validating driver "qemu2" against <nil>
	I1209 15:54:26.349487    2642 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 15:54:26.352246    2642 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 15:54:26.356601    2642 out.go:177] * Automatically selected the socket_vmnet network
	I1209 15:54:26.359718    2642 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 15:54:26.359743    2642 cni.go:84] Creating CNI manager for ""
	I1209 15:54:26.359761    2642 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1209 15:54:26.359765    2642 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 15:54:26.359799    2642 start.go:340] cluster config:
	{Name:ha-677000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-677000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 15:54:26.364349    2642 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 15:54:26.372596    2642 out.go:177] * Starting "ha-677000" primary control-plane node in "ha-677000" cluster
	I1209 15:54:26.376572    2642 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 15:54:26.376592    2642 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 15:54:26.376604    2642 cache.go:56] Caching tarball of preloaded images
	I1209 15:54:26.376686    2642 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 15:54:26.376692    2642 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 15:54:26.376900    2642 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/ha-677000/config.json ...
	I1209 15:54:26.376911    2642 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/ha-677000/config.json: {Name:mkadc81b08a8aa06e8af46eabbedfd5028badd19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 15:54:26.377389    2642 start.go:360] acquireMachinesLock for ha-677000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 15:54:26.377441    2642 start.go:364] duration metric: took 43.458µs to acquireMachinesLock for "ha-677000"
	I1209 15:54:26.377453    2642 start.go:93] Provisioning new machine with config: &{Name:ha-677000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-677000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 15:54:26.377492    2642 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 15:54:26.386539    2642 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 15:54:26.412775    2642 start.go:159] libmachine.API.Create for "ha-677000" (driver="qemu2")
	I1209 15:54:26.412826    2642 client.go:168] LocalClient.Create starting
	I1209 15:54:26.412915    2642 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 15:54:26.412958    2642 main.go:141] libmachine: Decoding PEM data...
	I1209 15:54:26.412971    2642 main.go:141] libmachine: Parsing certificate...
	I1209 15:54:26.413009    2642 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 15:54:26.413038    2642 main.go:141] libmachine: Decoding PEM data...
	I1209 15:54:26.413045    2642 main.go:141] libmachine: Parsing certificate...
	I1209 15:54:26.413421    2642 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 15:54:26.642092    2642 main.go:141] libmachine: Creating SSH key...
	I1209 15:54:26.735675    2642 main.go:141] libmachine: Creating Disk image...
	I1209 15:54:26.735682    2642 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 15:54:26.735888    2642 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/disk.qcow2
	I1209 15:54:26.752369    2642 main.go:141] libmachine: STDOUT: 
	I1209 15:54:26.752385    2642 main.go:141] libmachine: STDERR: 
	I1209 15:54:26.752442    2642 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/disk.qcow2 +20000M
	I1209 15:54:26.760951    2642 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 15:54:26.760969    2642 main.go:141] libmachine: STDERR: 
	I1209 15:54:26.760989    2642 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/disk.qcow2
	I1209 15:54:26.760993    2642 main.go:141] libmachine: Starting QEMU VM...
	I1209 15:54:26.761005    2642 qemu.go:418] Using hvf for hardware acceleration
	I1209 15:54:26.761031    2642 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:d1:9c:6a:b4:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/disk.qcow2
	I1209 15:54:26.806202    2642 main.go:141] libmachine: STDOUT: 
	I1209 15:54:26.806224    2642 main.go:141] libmachine: STDERR: 
	I1209 15:54:26.806228    2642 main.go:141] libmachine: Attempt 0
	I1209 15:54:26.806253    2642 main.go:141] libmachine: Searching for a2:d1:9c:6a:b4:87 in /var/db/dhcpd_leases ...
	I1209 15:54:26.806349    2642 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1209 15:54:26.806367    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:02:e3:ad:d8:3a:c0 ID:1,2:e3:ad:d8:3a:c0 Lease:0x6757908a}
	I1209 15:54:26.806374    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:76:ab:d5:54:89:cf ID:1,76:ab:d5:54:89:cf Lease:0x67578238}
	I1209 15:54:26.806380    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a2:77:94:2c:03:8b ID:1,a2:77:94:2c:3:8b Lease:0x67578205}
	I1209 15:54:26.806397    2642 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67578bb4}
	I1209 15:54:28.808555    2642 main.go:141] libmachine: Attempt 1
	I1209 15:54:28.808664    2642 main.go:141] libmachine: Searching for a2:d1:9c:6a:b4:87 in /var/db/dhcpd_leases ...
	I1209 15:54:28.809253    2642 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1209 15:54:28.809329    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:02:e3:ad:d8:3a:c0 ID:1,2:e3:ad:d8:3a:c0 Lease:0x6757908a}
	I1209 15:54:28.809368    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:76:ab:d5:54:89:cf ID:1,76:ab:d5:54:89:cf Lease:0x67578238}
	I1209 15:54:28.809399    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a2:77:94:2c:03:8b ID:1,a2:77:94:2c:3:8b Lease:0x67578205}
	I1209 15:54:28.809431    2642 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67578bb4}
	I1209 15:54:30.811661    2642 main.go:141] libmachine: Attempt 2
	I1209 15:54:30.811811    2642 main.go:141] libmachine: Searching for a2:d1:9c:6a:b4:87 in /var/db/dhcpd_leases ...
	I1209 15:54:30.812181    2642 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1209 15:54:30.812238    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:02:e3:ad:d8:3a:c0 ID:1,2:e3:ad:d8:3a:c0 Lease:0x6757908a}
	I1209 15:54:30.812270    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:76:ab:d5:54:89:cf ID:1,76:ab:d5:54:89:cf Lease:0x67578238}
	I1209 15:54:30.812299    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a2:77:94:2c:03:8b ID:1,a2:77:94:2c:3:8b Lease:0x67578205}
	I1209 15:54:30.812328    2642 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67578bb4}
	I1209 15:54:32.813180    2642 main.go:141] libmachine: Attempt 3
	I1209 15:54:32.813248    2642 main.go:141] libmachine: Searching for a2:d1:9c:6a:b4:87 in /var/db/dhcpd_leases ...
	I1209 15:54:32.813325    2642 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1209 15:54:32.813341    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:02:e3:ad:d8:3a:c0 ID:1,2:e3:ad:d8:3a:c0 Lease:0x6757908a}
	I1209 15:54:32.813352    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:76:ab:d5:54:89:cf ID:1,76:ab:d5:54:89:cf Lease:0x67578238}
	I1209 15:54:32.813358    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a2:77:94:2c:03:8b ID:1,a2:77:94:2c:3:8b Lease:0x67578205}
	I1209 15:54:32.813366    2642 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67578bb4}
	I1209 15:54:34.815409    2642 main.go:141] libmachine: Attempt 4
	I1209 15:54:34.815476    2642 main.go:141] libmachine: Searching for a2:d1:9c:6a:b4:87 in /var/db/dhcpd_leases ...
	I1209 15:54:34.815536    2642 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1209 15:54:34.815548    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:02:e3:ad:d8:3a:c0 ID:1,2:e3:ad:d8:3a:c0 Lease:0x6757908a}
	I1209 15:54:34.815554    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:76:ab:d5:54:89:cf ID:1,76:ab:d5:54:89:cf Lease:0x67578238}
	I1209 15:54:34.815561    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a2:77:94:2c:03:8b ID:1,a2:77:94:2c:3:8b Lease:0x67578205}
	I1209 15:54:34.815568    2642 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67578bb4}
	I1209 15:54:36.817608    2642 main.go:141] libmachine: Attempt 5
	I1209 15:54:36.817624    2642 main.go:141] libmachine: Searching for a2:d1:9c:6a:b4:87 in /var/db/dhcpd_leases ...
	I1209 15:54:36.817672    2642 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1209 15:54:36.817679    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:02:e3:ad:d8:3a:c0 ID:1,2:e3:ad:d8:3a:c0 Lease:0x6757908a}
	I1209 15:54:36.817683    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:76:ab:d5:54:89:cf ID:1,76:ab:d5:54:89:cf Lease:0x67578238}
	I1209 15:54:36.817689    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a2:77:94:2c:03:8b ID:1,a2:77:94:2c:3:8b Lease:0x67578205}
	I1209 15:54:36.817694    2642 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67578bb4}
	I1209 15:54:38.819786    2642 main.go:141] libmachine: Attempt 6
	I1209 15:54:38.819811    2642 main.go:141] libmachine: Searching for a2:d1:9c:6a:b4:87 in /var/db/dhcpd_leases ...
	I1209 15:54:38.819902    2642 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1209 15:54:38.819912    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:02:e3:ad:d8:3a:c0 ID:1,2:e3:ad:d8:3a:c0 Lease:0x6757908a}
	I1209 15:54:38.819919    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:76:ab:d5:54:89:cf ID:1,76:ab:d5:54:89:cf Lease:0x67578238}
	I1209 15:54:38.819924    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a2:77:94:2c:03:8b ID:1,a2:77:94:2c:3:8b Lease:0x67578205}
	I1209 15:54:38.819929    2642 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67578bb4}
	I1209 15:54:40.821994    2642 main.go:141] libmachine: Attempt 7
	I1209 15:54:40.822038    2642 main.go:141] libmachine: Searching for a2:d1:9c:6a:b4:87 in /var/db/dhcpd_leases ...
	I1209 15:54:40.822189    2642 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1209 15:54:40.822204    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:a2:d1:9c:6a:b4:87 ID:1,a2:d1:9c:6a:b4:87 Lease:0x6757914f}
	I1209 15:54:40.822208    2642 main.go:141] libmachine: Found match: a2:d1:9c:6a:b4:87
	I1209 15:54:40.822233    2642 main.go:141] libmachine: IP: 192.168.105.5
	I1209 15:54:40.822239    2642 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I1209 16:00:26.413178    2642 start.go:128] duration metric: took 6m0.03753175s to createHost
	I1209 16:00:26.413987    2642 start.go:83] releasing machines lock for "ha-677000", held for 6m0.037960875s
	W1209 16:00:26.414105    2642 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I1209 16:00:26.424289    2642 out.go:177] * Deleting "ha-677000" in qemu2 ...
	W1209 16:00:26.457634    2642 out.go:270] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1209 16:00:26.457660    2642 start.go:729] Will try again in 5 seconds ...
	I1209 16:00:31.459863    2642 start.go:360] acquireMachinesLock for ha-677000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:00:31.460345    2642 start.go:364] duration metric: took 380.625µs to acquireMachinesLock for "ha-677000"
	I1209 16:00:31.460473    2642 start.go:93] Provisioning new machine with config: &{Name:ha-677000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-677000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:00:31.460719    2642 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:00:31.465300    2642 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 16:00:31.515320    2642 start.go:159] libmachine.API.Create for "ha-677000" (driver="qemu2")
	I1209 16:00:31.515497    2642 client.go:168] LocalClient.Create starting
	I1209 16:00:31.515648    2642 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:00:31.515721    2642 main.go:141] libmachine: Decoding PEM data...
	I1209 16:00:31.515741    2642 main.go:141] libmachine: Parsing certificate...
	I1209 16:00:31.515839    2642 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:00:31.515894    2642 main.go:141] libmachine: Decoding PEM data...
	I1209 16:00:31.515909    2642 main.go:141] libmachine: Parsing certificate...
	I1209 16:00:31.519718    2642 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:00:31.697315    2642 main.go:141] libmachine: Creating SSH key...
	I1209 16:00:31.965526    2642 main.go:141] libmachine: Creating Disk image...
	I1209 16:00:31.965539    2642 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:00:31.965795    2642 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/disk.qcow2
	I1209 16:00:31.976098    2642 main.go:141] libmachine: STDOUT: 
	I1209 16:00:31.976135    2642 main.go:141] libmachine: STDERR: 
	I1209 16:00:31.976201    2642 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/disk.qcow2 +20000M
	I1209 16:00:31.984794    2642 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:00:31.984812    2642 main.go:141] libmachine: STDERR: 
	I1209 16:00:31.984834    2642 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/disk.qcow2
	I1209 16:00:31.984840    2642 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:00:31.984849    2642 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:00:31.984879    2642 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:d8:ba:66:76:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/disk.qcow2
	I1209 16:00:32.021528    2642 main.go:141] libmachine: STDOUT: 
	I1209 16:00:32.021565    2642 main.go:141] libmachine: STDERR: 
	I1209 16:00:32.021569    2642 main.go:141] libmachine: Attempt 0
	I1209 16:00:32.021591    2642 main.go:141] libmachine: Searching for d2:d8:ba:66:76:a0 in /var/db/dhcpd_leases ...
	I1209 16:00:32.021716    2642 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1209 16:00:32.021726    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:a2:d1:9c:6a:b4:87 ID:1,a2:d1:9c:6a:b4:87 Lease:0x6757914f}
	I1209 16:00:32.021734    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:02:e3:ad:d8:3a:c0 ID:1,2:e3:ad:d8:3a:c0 Lease:0x6757908a}
	I1209 16:00:32.021740    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:76:ab:d5:54:89:cf ID:1,76:ab:d5:54:89:cf Lease:0x67578238}
	I1209 16:00:32.021748    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a2:77:94:2c:03:8b ID:1,a2:77:94:2c:3:8b Lease:0x67578205}
	I1209 16:00:32.021754    2642 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67578bb4}
	I1209 16:00:34.023914    2642 main.go:141] libmachine: Attempt 1
	I1209 16:00:34.024029    2642 main.go:141] libmachine: Searching for d2:d8:ba:66:76:a0 in /var/db/dhcpd_leases ...
	I1209 16:00:34.024617    2642 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1209 16:00:34.024668    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:a2:d1:9c:6a:b4:87 ID:1,a2:d1:9c:6a:b4:87 Lease:0x6757914f}
	I1209 16:00:34.024698    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:02:e3:ad:d8:3a:c0 ID:1,2:e3:ad:d8:3a:c0 Lease:0x6757908a}
	I1209 16:00:34.024727    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:76:ab:d5:54:89:cf ID:1,76:ab:d5:54:89:cf Lease:0x67578238}
	I1209 16:00:34.024757    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a2:77:94:2c:03:8b ID:1,a2:77:94:2c:3:8b Lease:0x67578205}
	I1209 16:00:34.024784    2642 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67578bb4}
	I1209 16:00:36.027055    2642 main.go:141] libmachine: Attempt 2
	I1209 16:00:36.027146    2642 main.go:141] libmachine: Searching for d2:d8:ba:66:76:a0 in /var/db/dhcpd_leases ...
	I1209 16:00:36.027526    2642 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1209 16:00:36.027582    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:a2:d1:9c:6a:b4:87 ID:1,a2:d1:9c:6a:b4:87 Lease:0x6757914f}
	I1209 16:00:36.027615    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:02:e3:ad:d8:3a:c0 ID:1,2:e3:ad:d8:3a:c0 Lease:0x6757908a}
	I1209 16:00:36.027645    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:76:ab:d5:54:89:cf ID:1,76:ab:d5:54:89:cf Lease:0x67578238}
	I1209 16:00:36.027673    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a2:77:94:2c:03:8b ID:1,a2:77:94:2c:3:8b Lease:0x67578205}
	I1209 16:00:36.027700    2642 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67578bb4}
	I1209 16:00:38.028645    2642 main.go:141] libmachine: Attempt 3
	I1209 16:00:38.028702    2642 main.go:141] libmachine: Searching for d2:d8:ba:66:76:a0 in /var/db/dhcpd_leases ...
	I1209 16:00:38.028895    2642 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1209 16:00:38.028906    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:a2:d1:9c:6a:b4:87 ID:1,a2:d1:9c:6a:b4:87 Lease:0x6757914f}
	I1209 16:00:38.028921    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:02:e3:ad:d8:3a:c0 ID:1,2:e3:ad:d8:3a:c0 Lease:0x6757908a}
	I1209 16:00:38.028926    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:76:ab:d5:54:89:cf ID:1,76:ab:d5:54:89:cf Lease:0x67578238}
	I1209 16:00:38.028932    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a2:77:94:2c:03:8b ID:1,a2:77:94:2c:3:8b Lease:0x67578205}
	I1209 16:00:38.028940    2642 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67578bb4}
	I1209 16:00:40.030978    2642 main.go:141] libmachine: Attempt 4
	I1209 16:00:40.030991    2642 main.go:141] libmachine: Searching for d2:d8:ba:66:76:a0 in /var/db/dhcpd_leases ...
	I1209 16:00:40.031062    2642 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1209 16:00:40.031069    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:a2:d1:9c:6a:b4:87 ID:1,a2:d1:9c:6a:b4:87 Lease:0x6757914f}
	I1209 16:00:40.031076    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:02:e3:ad:d8:3a:c0 ID:1,2:e3:ad:d8:3a:c0 Lease:0x6757908a}
	I1209 16:00:40.031080    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:76:ab:d5:54:89:cf ID:1,76:ab:d5:54:89:cf Lease:0x67578238}
	I1209 16:00:40.031085    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a2:77:94:2c:03:8b ID:1,a2:77:94:2c:3:8b Lease:0x67578205}
	I1209 16:00:40.031097    2642 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67578bb4}
	I1209 16:00:42.033104    2642 main.go:141] libmachine: Attempt 5
	I1209 16:00:42.033111    2642 main.go:141] libmachine: Searching for d2:d8:ba:66:76:a0 in /var/db/dhcpd_leases ...
	I1209 16:00:42.033196    2642 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1209 16:00:42.033202    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:a2:d1:9c:6a:b4:87 ID:1,a2:d1:9c:6a:b4:87 Lease:0x6757914f}
	I1209 16:00:42.033207    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:02:e3:ad:d8:3a:c0 ID:1,2:e3:ad:d8:3a:c0 Lease:0x6757908a}
	I1209 16:00:42.033217    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:76:ab:d5:54:89:cf ID:1,76:ab:d5:54:89:cf Lease:0x67578238}
	I1209 16:00:42.033224    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a2:77:94:2c:03:8b ID:1,a2:77:94:2c:3:8b Lease:0x67578205}
	I1209 16:00:42.033228    2642 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67578bb4}
	I1209 16:00:44.035275    2642 main.go:141] libmachine: Attempt 6
	I1209 16:00:44.035304    2642 main.go:141] libmachine: Searching for d2:d8:ba:66:76:a0 in /var/db/dhcpd_leases ...
	I1209 16:00:44.035407    2642 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1209 16:00:44.035420    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:a2:d1:9c:6a:b4:87 ID:1,a2:d1:9c:6a:b4:87 Lease:0x6757914f}
	I1209 16:00:44.035426    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:02:e3:ad:d8:3a:c0 ID:1,2:e3:ad:d8:3a:c0 Lease:0x6757908a}
	I1209 16:00:44.035431    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:76:ab:d5:54:89:cf ID:1,76:ab:d5:54:89:cf Lease:0x67578238}
	I1209 16:00:44.035438    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:a2:77:94:2c:03:8b ID:1,a2:77:94:2c:3:8b Lease:0x67578205}
	I1209 16:00:44.035444    2642 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x67578bb4}
	I1209 16:00:46.037594    2642 main.go:141] libmachine: Attempt 7
	I1209 16:00:46.037679    2642 main.go:141] libmachine: Searching for d2:d8:ba:66:76:a0 in /var/db/dhcpd_leases ...
	I1209 16:00:46.038234    2642 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I1209 16:00:46.038272    2642 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:d2:d8:ba:66:76:a0 ID:1,d2:d8:ba:66:76:a0 Lease:0x675792bc}
	I1209 16:00:46.038283    2642 main.go:141] libmachine: Found match: d2:d8:ba:66:76:a0
	I1209 16:00:46.038310    2642 main.go:141] libmachine: IP: 192.168.105.6
	I1209 16:00:46.038324    2642 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
	I1209 16:06:31.560642    2642 start.go:128] duration metric: took 6m0.056994042s to createHost
	I1209 16:06:31.560715    2642 start.go:83] releasing machines lock for "ha-677000", held for 6m0.057466583s
	W1209 16:06:31.561030    2642 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-677000" may fix it: creating host: create host timed out in 360.000000 seconds
	* Failed to start qemu2 VM. Running "minikube delete -p ha-677000" may fix it: creating host: create host timed out in 360.000000 seconds
	I1209 16:06:31.568572    2642 out.go:201] 
	W1209 16:06:31.571715    2642 out.go:270] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: creating host: create host timed out in 360.000000 seconds
	W1209 16:06:31.571794    2642 out.go:270] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1209 16:06:31.571838    2642 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1209 16:06:31.584566    2642 out.go:201] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-677000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 52
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000: exit status 7 (72.348166ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1209 16:06:31.673747    3164 status.go:393] failed to get driver ip: parsing IP: 
	E1209 16:06:31.673757    3164 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-677000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StartCluster (725.38s)
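
Note on the failure mode above: the qemu2 driver launches the VM, then discovers its IP by polling the macOS DHCP lease database for the MAC address it generated, and only then waits for SSH. Both attempts in this log found a lease within about 14 seconds (192.168.105.5, then 192.168.105.6), so the 6m0s createHost budget (StartHostTimeout:6m0s in the config dump) was exhausted in the subsequent SSH wait, not in lease discovery. A minimal Go sketch of the lease-polling step, assuming the key=value block layout commonly used by /var/db/dhcpd_leases (field names ip_address/hw_address are assumptions, not shown in this log; note the ID fields above suggest leading zeros in MAC octets may be dropped):

package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// findLeaseIP scans the lease file for a block whose hw_address line ends in
// mac. In the assumed format ip_address precedes hw_address within a block,
// so we remember the last ip_address seen.
func findLeaseIP(path, mac string) (string, bool) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", false // treat an unreadable file as "no lease yet"
	}
	ip := ""
	for _, line := range strings.Split(string(data), "\n") {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, "ip_address=") {
			ip = strings.TrimPrefix(line, "ip_address=")
		}
		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
			return ip, true
		}
	}
	return "", false
}

func main() {
	const mac = "a2:d1:9c:6a:b4:87" // the MAC the driver generated in the log above
	for attempt := 0; attempt < 180; attempt++ {
		fmt.Println("Attempt", attempt)
		if ip, ok := findLeaseIP("/var/db/dhcpd_leases", mac); ok {
			fmt.Println("Found match:", mac, "IP:", ip)
			return
		}
		time.Sleep(2 * time.Second) // the log shows roughly 2s between attempts
	}
	fmt.Println("timed out waiting for a DHCP lease")
}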

TestMultiControlPlane/serial/DeployApp (110.2s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-677000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-677000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (65.662542ms)

** stderr ** 
	error: cluster "ha-677000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-677000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-677000 -- rollout status deployment/busybox: exit status 1 (63.347333ms)

** stderr ** 
	error: no server found for cluster "ha-677000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (62.833125ms)

** stderr ** 
	error: no server found for cluster "ha-677000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 16:06:31.867117    1742 retry.go:31] will retry after 1.463715624s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.264584ms)

** stderr ** 
	error: no server found for cluster "ha-677000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 16:06:33.439444    1742 retry.go:31] will retry after 1.782791116s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.977625ms)

** stderr ** 
	error: no server found for cluster "ha-677000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 16:06:35.333631    1742 retry.go:31] will retry after 1.181082465s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.262875ms)

** stderr ** 
	error: no server found for cluster "ha-677000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 16:06:36.625442    1742 retry.go:31] will retry after 3.449279754s: failed to retrieve Pod IPs (may be temporary): exit status 1
E1209 16:06:39.918038    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.377667ms)

** stderr ** 
	error: no server found for cluster "ha-677000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 16:06:40.185229    1742 retry.go:31] will retry after 5.863399244s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.955333ms)

** stderr ** 
	error: no server found for cluster "ha-677000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 16:06:46.157963    1742 retry.go:31] will retry after 10.941104538s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.717875ms)

** stderr ** 
	error: no server found for cluster "ha-677000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 16:06:57.209316    1742 retry.go:31] will retry after 10.965195117s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.891583ms)

** stderr ** 
	error: no server found for cluster "ha-677000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 16:07:08.284842    1742 retry.go:31] will retry after 23.076016083s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.857042ms)

** stderr ** 
	error: no server found for cluster "ha-677000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 16:07:31.472341    1742 retry.go:31] will retry after 21.918473115s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.138542ms)

** stderr ** 
	error: no server found for cluster "ha-677000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 16:07:53.501576    1742 retry.go:31] will retry after 27.980001988s: failed to retrieve Pod IPs (may be temporary): exit status 1
E1209 16:08:03.009474    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.480708ms)

** stderr ** 
	error: no server found for cluster "ha-677000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.009875ms)

** stderr ** 
	error: no server found for cluster "ha-677000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-677000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-677000 -- exec  -- nslookup kubernetes.io: exit status 1 (61.605208ms)

** stderr ** 
	error: no server found for cluster "ha-677000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-677000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-677000 -- exec  -- nslookup kubernetes.default: exit status 1 (62.207458ms)

** stderr ** 
	error: no server found for cluster "ha-677000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-677000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-677000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.349209ms)

** stderr ** 
	error: no server found for cluster "ha-677000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000: exit status 7 (34.904875ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1209 16:08:21.876445    3238 status.go:393] failed to get driver ip: parsing IP: 
	E1209 16:08:21.876453    3238 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-677000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DeployApp (110.20s)
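
Every kubectl probe above fails instantly because the cluster was never created, but the harness still walks its full retry schedule; the retry.go:31 lines show jittered, roughly doubling waits (1.46s, 1.78s, ... 27.98s). A minimal Go sketch of that retry-with-backoff pattern follows; the initial wait, multiplier, and jitter here are illustrative values, not minikube's actual constants:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil re-runs op with a jittered, roughly doubling delay until it
// succeeds or the time budget is spent.
func retryUntil(op func() error, budget time.Duration) error {
	wait := time.Second
	deadline := time.Now().Add(budget)
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		// Jitter the delay so concurrent pollers do not fire in lockstep.
		d := wait/2 + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		wait *= 2
	}
}

func main() {
	err := retryUntil(func() error {
		return errors.New("failed to retrieve Pod IPs (may be temporary)")
	}, 10*time.Second)
	fmt.Println(err)
}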

TestMultiControlPlane/serial/PingHostFromPods (0.1s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-677000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.453375ms)

** stderr ** 
	error: no server found for cluster "ha-677000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000: exit status 7 (34.873125ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1209 16:08:21.974159    3243 status.go:393] failed to get driver ip: parsing IP: 
	E1209 16:08:21.974164    3243 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-677000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.10s)
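
The probes above run kubectl through the minikube binary (`out/minikube-darwin-arm64 kubectl -p ha-677000 -- ...`) so the command resolves the profile's kubeconfig rather than the user's default context. A minimal Go sketch of issuing the same invocation and capturing the non-zero exit, using the binary path and profile name from this log; with no running cluster it reproduces the exit status 1 seen above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the test makes: kubectl routed through the minikube
	// binary so it picks up the ha-677000 profile's kubeconfig.
	cmd := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", "ha-677000",
		"--", "get", "pods", "-o", "jsonpath={.items[*].metadata.name}")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With no running cluster this prints the "exit status 1" seen above.
		fmt.Printf("non-zero exit: %v\n%s", err, out)
		return
	}
	fmt.Printf("pod names: %s\n", out)
}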

TestMultiControlPlane/serial/AddWorkerNode (0.09s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-677000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-677000 -v=7 --alsologtostderr: exit status 50 (50.980583ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1209 16:08:22.007595    3245 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:08:22.007870    3245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:08:22.007873    3245 out.go:358] Setting ErrFile to fd 2...
	I1209 16:08:22.007876    3245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:08:22.008025    3245 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:08:22.008268    3245 mustload.go:65] Loading cluster: ha-677000
	I1209 16:08:22.008479    3245 config.go:182] Loaded profile config "ha-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:08:22.009153    3245 host.go:66] Checking if "ha-677000" exists ...
	I1209 16:08:22.013283    3245 out.go:201] 
	W1209 16:08:22.017249    3245 out.go:270] X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-677000 endpoint: failed to lookup ip for ""
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-677000 endpoint: failed to lookup ip for ""
	W1209 16:08:22.017268    3245 out.go:270] * Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	I1209 16:08:22.021167    3245 out.go:201] 

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-677000 -v=7 --alsologtostderr" : exit status 50
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000: exit status 7 (34.98125ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1209 16:08:22.060414    3247 status.go:393] failed to get driver ip: parsing IP: 
	E1209 16:08:22.060419    3247 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-677000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.09s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-677000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-677000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.013333ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-677000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-677000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-677000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000: exit status 7 (34.637708ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1209 16:08:22.122472    3250 status.go:393] failed to get driver ip: parsing IP: 
	E1209 16:08:22.122481    3250 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-677000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-677000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-677000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-677000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-677000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-677000" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-677000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-677000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-677000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000: exit status 7 (34.967958ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1209 16:08:22.209350    3255 status.go:393] failed to get driver ip: parsing IP: 
	E1209 16:08:22.209355    3255 status.go:119] status error: parsing IP: 

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-677000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
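The "Unknown" status in this failure follows directly from the status errors above: with no driver IP to parse, minikube cannot classify the profile as "HAppy" or "Degraded". As a minimal sketch of the kind of check ha_test.go:309 performs (not the test's actual code; only the Name and Status fields visible in the failure message are modeled):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors just the two fields this check needs; the real
// Config payload, as dumped above, is far larger.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	// The same command the test runs.
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-677000" && p.Status != "HAppy" {
			fmt.Printf("expected %q to be HAppy, got %q\n", p.Name, p.Status)
		}
	}
}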
TestMultiControlPlane/serial/StopSecondaryNode (0.12s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-677000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-677000 node stop m02 -v=7 --alsologtostderr: exit status 85 (50.849375ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1209 16:08:22.278517    3259 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:08:22.278805    3259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:08:22.278808    3259 out.go:358] Setting ErrFile to fd 2...
	I1209 16:08:22.278810    3259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:08:22.278954    3259 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:08:22.279231    3259 mustload.go:65] Loading cluster: ha-677000
	I1209 16:08:22.279453    3259 config.go:182] Loaded profile config "ha-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:08:22.284077    3259 out.go:201] 
	W1209 16:08:22.287282    3259 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1209 16:08:22.287288    3259 out.go:270] * 
	* 
	W1209 16:08:22.288704    3259 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:08:22.293239    3259 out.go:201] 
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-677000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-677000 status -v=7 --alsologtostderr
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-677000 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-677000 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-677000 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-677000 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000: exit status 7 (34.405ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1209 16:08:22.366076    3263 status.go:393] failed to get driver ip: parsing IP: 
	E1209 16:08:22.366085    3263 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-677000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)
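Exit status 85 (GUEST_NODE_RETRIEVE) indicates the profile never registered an m02 node at all: the Config.Nodes array dumped earlier holds a single control-plane entry, because StartCluster failed before any secondary node was added. A hedged sketch of guarding a `node stop` on the node actually existing; it assumes `minikube node list -p <profile>` emits one tab-separated "name<TAB>ip" line per node (the format seen later in this report) and that the secondary node would be named ha-677000-m02, both of which are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasNode reports whether the profile lists the named node. The output
// format assumption (one "name\tip" line per node) is taken from the
// `node list` output elsewhere in this report.
func hasNode(profile, node string) (bool, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "node", "list", "-p", profile).Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if fields := strings.Split(line, "\t"); len(fields) > 0 && fields[0] == node {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasNode("ha-677000", "ha-677000-m02")
	if err != nil || !ok {
		fmt.Println("node m02 not present; a stop would exit with GUEST_NODE_RETRIEVE:", err)
		return
	}
	fmt.Println("node exists; stop can proceed")
}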
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-677000" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-677000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-677000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-677000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000: exit status 7 (34.621708ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1209 16:08:22.453125    3268 status.go:393] failed to get driver ip: parsing IP: 
	E1209 16:08:22.453130    3268 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-677000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)
TestMultiControlPlane/serial/RestartSecondaryNode (0.16s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-677000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-677000 node start m02 -v=7 --alsologtostderr: exit status 85 (51.875ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1209 16:08:22.487163    3270 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:08:22.487457    3270 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:08:22.487460    3270 out.go:358] Setting ErrFile to fd 2...
	I1209 16:08:22.487462    3270 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:08:22.487605    3270 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:08:22.487840    3270 mustload.go:65] Loading cluster: ha-677000
	I1209 16:08:22.488038    3270 config.go:182] Loaded profile config "ha-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:08:22.492284    3270 out.go:201] 
	W1209 16:08:22.495256    3270 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1209 16:08:22.495261    3270 out.go:270] * 
	* 
	W1209 16:08:22.496728    3270 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:08:22.501094    3270 out.go:201] 
** /stderr **
ha_test.go:424: I1209 16:08:22.487163    3270 out.go:345] Setting OutFile to fd 1 ...
I1209 16:08:22.487457    3270 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 16:08:22.487460    3270 out.go:358] Setting ErrFile to fd 2...
I1209 16:08:22.487462    3270 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 16:08:22.487605    3270 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
I1209 16:08:22.487840    3270 mustload.go:65] Loading cluster: ha-677000
I1209 16:08:22.488038    3270 config.go:182] Loaded profile config "ha-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 16:08:22.492284    3270 out.go:201] 
W1209 16:08:22.495256    3270 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1209 16:08:22.495261    3270 out.go:270] * 
* 
W1209 16:08:22.496728    3270 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1209 16:08:22.501094    3270 out.go:201] 
ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-677000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-677000 status -v=7 --alsologtostderr
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-677000 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-darwin-arm64 -p ha-677000 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-darwin-arm64 -p ha-677000 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-darwin-arm64 -p ha-677000 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
ha_test.go:450: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (33.782791ms)
** stderr ** 
	E1209 16:08:22.571618    3274 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1209 16:08:22.572143    3274 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1209 16:08:22.573263    3274 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1209 16:08:22.573719    3274 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1209 16:08:22.574876    3274 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	The connection to the server localhost:8080 was refused - did you specify the right host or port?
** /stderr **
ha_test.go:452: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000: exit status 7 (35.223458ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1209 16:08:22.610053    3275 status.go:393] failed to get driver ip: parsing IP: 
	E1209 16:08:22.610059    3275 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-677000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (0.16s)
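The kubectl errors above are a downstream symptom rather than a separate failure: the cluster never started, so the kubeconfig has no reachable server and kubectl falls back to its default of localhost:8080, which is refused. A small sketch of probing the API server before invoking kubectl; the address combines the dhcpd lease IP (192.168.105.6) and APIServerPort (8443) seen earlier in this report, and is illustrative only:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// IP and port taken from this report's logs; adjust per profile.
	addr := "192.168.105.6:8443"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable; skipping kubectl:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver reachable; kubectl has a live target")
}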
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.1s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-677000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-677000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-677000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-677000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-677000" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-677000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-677000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-677000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000: exit status 7 (35.09675ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1209 16:08:22.705469    3280 status.go:393] failed to get driver ip: parsing IP: 
	E1209 16:08:22.705482    3280 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-677000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.10s)
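The node-count assertion at ha_test.go:305 fails for the same root cause as the status checks: the profile's Config.Nodes array still holds only the primary control-plane entry, where a healthy HA run at this point would have four nodes (three control planes plus the added worker). A sketch of that count, using the same decode scaffold as the earlier Status sketch and modeling only the fields needed:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-677000" && len(p.Config.Nodes) != 4 {
			fmt.Printf("expected 4 nodes, profile has %d\n", len(p.Config.Nodes))
		}
	}
}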
TestMultiControlPlane/serial/RestartClusterKeepsNodes (963.63s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-677000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-677000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-677000 -v=7 --alsologtostderr: (5.113027708s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-677000 --wait=true -v=7 --alsologtostderr
E1209 16:08:30.972144    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 16:11:39.918361    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 16:13:30.972836    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 16:14:54.068930    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 16:16:39.919008    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 16:18:30.973532    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 16:21:39.919744    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 16:23:30.974208    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-677000 --wait=true -v=7 --alsologtostderr: signal: killed (15m58.44385925s)
-- stdout --
	* [ha-677000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-677000" primary control-plane node in "ha-677000" cluster
	* Restarting existing qemu2 VM for "ha-677000" ...
-- /stdout --
** stderr ** 
	I1209 16:08:27.919425    3309 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:08:27.919623    3309 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:08:27.919627    3309 out.go:358] Setting ErrFile to fd 2...
	I1209 16:08:27.919630    3309 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:08:27.919790    3309 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:08:27.921024    3309 out.go:352] Setting JSON to false
	I1209 16:08:27.941563    3309 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2277,"bootTime":1733787030,"procs":540,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:08:27.941652    3309 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:08:27.945031    3309 out.go:177] * [ha-677000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:08:27.952896    3309 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:08:27.952941    3309 notify.go:220] Checking for updates...
	I1209 16:08:27.961055    3309 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:08:27.963972    3309 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:08:27.966997    3309 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:08:27.970013    3309 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:08:27.972949    3309 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:08:27.976248    3309 config.go:182] Loaded profile config "ha-677000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:08:27.976306    3309 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:08:27.981025    3309 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 16:08:27.987990    3309 start.go:297] selected driver: qemu2
	I1209 16:08:27.987997    3309 start.go:901] validating driver "qemu2" against &{Name:ha-677000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.2 ClusterName:ha-677000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:08:27.988042    3309 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:08:27.990413    3309 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:08:27.990436    3309 cni.go:84] Creating CNI manager for ""
	I1209 16:08:27.990458    3309 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 16:08:27.990506    3309 start.go:340] cluster config:
	{Name:ha-677000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-677000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:08:27.994742    3309 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:08:28.001992    3309 out.go:177] * Starting "ha-677000" primary control-plane node in "ha-677000" cluster
	I1209 16:08:28.006016    3309 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:08:28.006032    3309 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:08:28.006044    3309 cache.go:56] Caching tarball of preloaded images
	I1209 16:08:28.006120    3309 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:08:28.006125    3309 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:08:28.006181    3309 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/ha-677000/config.json ...
	I1209 16:08:28.006662    3309 start.go:360] acquireMachinesLock for ha-677000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:08:28.006706    3309 start.go:364] duration metric: took 38.833µs to acquireMachinesLock for "ha-677000"
	I1209 16:08:28.006714    3309 start.go:96] Skipping create...Using existing machine configuration
	I1209 16:08:28.006719    3309 fix.go:54] fixHost starting: 
	I1209 16:08:28.006838    3309 fix.go:112] recreateIfNeeded on ha-677000: state=Stopped err=<nil>
	W1209 16:08:28.006847    3309 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 16:08:28.014964    3309 out.go:177] * Restarting existing qemu2 VM for "ha-677000" ...
	I1209 16:08:28.018967    3309 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:08:28.019003    3309 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:d8:ba:66:76:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/ha-677000/disk.qcow2
	I1209 16:08:28.057205    3309 main.go:141] libmachine: STDOUT: 
	I1209 16:08:28.057233    3309 main.go:141] libmachine: STDERR: 
	I1209 16:08:28.057237    3309 main.go:141] libmachine: Attempt 0
	I1209 16:08:28.057258    3309 main.go:141] libmachine: Searching for d2:d8:ba:66:76:a0 in /var/db/dhcpd_leases ...
	I1209 16:08:28.057342    3309 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I1209 16:08:28.057360    3309 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:d2:d8:ba:66:76:a0 ID:1,d2:d8:ba:66:76:a0 Lease:0x67578679}
	I1209 16:08:28.057366    3309 main.go:141] libmachine: Found match: d2:d8:ba:66:76:a0
	I1209 16:08:28.057373    3309 main.go:141] libmachine: IP: 192.168.105.6
	I1209 16:08:28.057377    3309 main.go:141] libmachine: Waiting for VM to start (ssh -p 0 docker@192.168.105.6)...
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-677000 -v=7 --alsologtostderr" : signal: killed
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-677000
ha_test.go:474: (dbg) Non-zero exit: out/minikube-darwin-arm64 node list -p ha-677000: context deadline exceeded (417ns)
ha_test.go:476: failed to run node list. args "out/minikube-darwin-arm64 node list -p ha-677000" : context deadline exceeded
ha_test.go:481: reported node list is not the same after restart. Before restart: ha-677000	
After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-677000 -n ha-677000: exit status 7 (37.169ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1209 16:24:26.335936    3543 status.go:393] failed to get driver ip: parsing IP: 
	E1209 16:24:26.335945    3543 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-677000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (963.63s)
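The "signal: killed (15m58.44385925s)" result means the restart was terminated from outside, not that minikube exited on its own: the run hung at "Waiting for VM to start (ssh ...)" because SSH on the restarted VM never came up, and the harness enforced a deadline. The pattern, sketched with the standard library (an illustration of the timeout behavior, not the harness's actual mechanism):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Roughly the window after which this run was killed.
	ctx, cancel := context.WithTimeout(context.Background(), 16*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-darwin-arm64",
		"start", "-p", "ha-677000", "--wait=true", "-v=7", "--alsologtostderr")
	err := cmd.Run()
	if ctx.Err() == context.DeadlineExceeded {
		// CommandContext kills the process, yielding "signal: killed".
		fmt.Println("start exceeded the deadline and was killed")
		return
	}
	fmt.Println("start finished:", err)
}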
TestJSONOutput/start/Command (725.27s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-256000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E1209 16:26:39.920991    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 16:28:30.975470    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 16:31:34.075127    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 16:31:39.921369    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 16:33:30.960118    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 16:36:39.905036    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-256000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 52 (12m5.270970375s)
-- stdout --
	{"specversion":"1.0","id":"3e4e3be3-4d2b-46d2-9b9f-39b3b523a6e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-256000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5676c355-8648-407c-8940-0cfbe692fd09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20062"}}
	{"specversion":"1.0","id":"de123cfb-eacb-43ff-9dd4-a743899e4a65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig"}}
	{"specversion":"1.0","id":"f1e7d124-7031-426f-a026-79186eabf287","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b921e378-7f43-4df5-bf8b-216b3840423e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"656008d4-2023-4c74-9c12-80f863dd2f81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube"}}
	{"specversion":"1.0","id":"5bbfaa1d-2076-4102-9aca-345abfbfebfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"63e1c597-17b7-4634-9441-32397049991b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"76b246b3-d490-49ea-92ee-6142f665d2a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"88a941a0-49c7-4e3e-aefc-ee0e79e1c365","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-256000\" primary control-plane node in \"json-output-256000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f371c5b-137e-4612-bb02-0c1fc0c5e4fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"c3e96879-f4d7-438f-88f5-c15f2111c74d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-256000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"25ee584d-2a3d-46e5-b4b7-8308ba28d567","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"}}
	{"specversion":"1.0","id":"08e4ea74-c564-4648-91be-8accaa3bba67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"d641f090-b8a9-4af7-a256-b4220882ceb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-256000\" may fix it: creating host: create host timed out in 360.000000 seconds"}}
	{"specversion":"1.0","id":"5f8e127e-539e-4a93-bd9d-cdd2ce2a71e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try 'minikube delete', and disable any conflicting VPN or firewall software","exitcode":"52","issues":"https://github.com/kubernetes/minikube/issues/7072","message":"Failed to start host: creating host: create host timed out in 360.000000 seconds","name":"DRV_CREATE_TIMEOUT","url":""}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-256000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 52
--- FAIL: TestJSONOutput/start/Command (725.27s)
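Exit status 52 (DRV_CREATE_TIMEOUT) also surfaces inside the JSON stream itself: each stdout line is a self-contained CloudEvent, and the fatal condition arrives as an io.k8s.sigs.minikube.error event carrying exitcode, name, and advice. A sketch that pulls those fields back out of a captured stream, modeling only the envelope fields visible in the events above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the per-line CloudEvent envelope seen in the output
// above; the data values are all strings in these events.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Pipe a captured `minikube start --output=json` log in on stdin.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.error" && ev.Data["exitcode"] != "" {
			fmt.Printf("fatal: %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}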
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 9 has already been assigned to another step:
Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
Cannot use for:
Deleting "json-output-256000" in qemu2 ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 3e4e3be3-4d2b-46d2-9b9f-39b3b523a6e0
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-256000] minikube v1.34.0 on Darwin 15.0.1 (arm64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 5676c355-8648-407c-8940-0cfbe692fd09
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=20062"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: de123cfb-eacb-43ff-9dd4-a743899e4a65
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: f1e7d124-7031-426f-a026-79186eabf287
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-arm64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: b921e378-7f43-4df5-bf8b-216b3840423e
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 656008d4-2023-4c74-9c12-80f863dd2f81
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 5bbfaa1d-2076-4102-9aca-345abfbfebfc
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 63e1c597-17b7-4634-9441-32397049991b
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the qemu2 driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 76b246b3-d490-49ea-92ee-6142f665d2a3
datacontenttype: application/json
Data,
{
"message": "Automatically selected the socket_vmnet network"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 88a941a0-49c7-4e3e-aefc-ee0e79e1c365
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-256000\" primary control-plane node in \"json-output-256000\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 8f371c5b-137e-4612-bb02-0c1fc0c5e4fa
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: c3e96879-f4d7-438f-88f5-c15f2111c74d
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Deleting \"json-output-256000\" in qemu2 ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 25ee584d-2a3d-46e5-b4b7-8308ba28d567
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 08e4ea74-c564-4648-91be-8accaa3bba67
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: d641f090-b8a9-4af7-a256-b4220882ceb7
datacontenttype: application/json
Data,
{
"message": "Failed to start qemu2 VM. Running \"minikube delete -p json-output-256000\" may fix it: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 5f8e127e-539e-4a93-bd9d-cdd2ce2a71e6
datacontenttype: application/json
Data,
{
"advice": "Try 'minikube delete', and disable any conflicting VPN or firewall software",
"exitcode": "52",
"issues": "https://github.com/kubernetes/minikube/issues/7072",
"message": "Failed to start host: creating host: create host timed out in 360.000000 seconds",
"name": "DRV_CREATE_TIMEOUT",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
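
Reading the dump above, the invariant this subtest enforces is that a "currentstep" value is bound to exactly one step message, and the create/delete/retry loop reused step 9 for three different messages. A hedged reconstruction of the check, inferred from the failure text rather than copied from json_output_test.go, reusing the cloudEvent type and fmt import from the sketch above:

// distinctSteps sketches the uniqueness property: the same currentstep must
// not reappear with a different message.
func distinctSteps(events []cloudEvent) error {
	seen := map[string]string{} // currentstep -> first message seen
	for _, ev := range events {
		if ev.Type != "io.k8s.sigs.minikube.step" {
			continue
		}
		step, msg := ev.Data["currentstep"], ev.Data["message"]
		if first, ok := seen[step]; ok && first != msg {
			return fmt.Errorf("step %s already assigned to %q, cannot use for %q", step, first, msg)
		}
		seen[step] = msg
	}
	return nil
}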

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:144: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 3e4e3be3-4d2b-46d2-9b9f-39b3b523a6e0
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-256000] minikube v1.34.0 on Darwin 15.0.1 (arm64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 5676c355-8648-407c-8940-0cfbe692fd09
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=20062"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: de123cfb-eacb-43ff-9dd4-a743899e4a65
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: f1e7d124-7031-426f-a026-79186eabf287
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-arm64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: b921e378-7f43-4df5-bf8b-216b3840423e
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 656008d4-2023-4c74-9c12-80f863dd2f81
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 5bbfaa1d-2076-4102-9aca-345abfbfebfc
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 63e1c597-17b7-4634-9441-32397049991b
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the qemu2 driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 76b246b3-d490-49ea-92ee-6142f665d2a3
datacontenttype: application/json
Data,
{
"message": "Automatically selected the socket_vmnet network"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 88a941a0-49c7-4e3e-aefc-ee0e79e1c365
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-256000\" primary control-plane node in \"json-output-256000\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 8f371c5b-137e-4612-bb02-0c1fc0c5e4fa
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: c3e96879-f4d7-438f-88f5-c15f2111c74d
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Deleting \"json-output-256000\" in qemu2 ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 25ee584d-2a3d-46e5-b4b7-8308ba28d567
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 08e4ea74-c564-4648-91be-8accaa3bba67
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: d641f090-b8a9-4af7-a256-b4220882ceb7
datacontenttype: application/json
Data,
{
"message": "Failed to start qemu2 VM. Running \"minikube delete -p json-output-256000\" may fix it: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 5f8e127e-539e-4a93-bd9d-cdd2ce2a71e6
datacontenttype: application/json
Data,
{
"advice": "Try 'minikube delete', and disable any conflicting VPN or firewall software",
"exitcode": "52",
"issues": "https://github.com/kubernetes/minikube/issues/7072",
"message": "Failed to start host: creating host: create host timed out in 360.000000 seconds",
"name": "DRV_CREATE_TIMEOUT",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
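
The companion subtest checks ordering rather than uniqueness: "currentstep" must grow across the stream, and the stalled 9, 9, 9 sequence above trips it. A sketch under the same assumptions ("strictly increasing" is inferred from this failure, not taken from the test source; strconv is needed for the string-to-int conversion):

// increasingSteps sketches the ordering property over the same event slice.
func increasingSteps(events []cloudEvent) error {
	last := -1
	for _, ev := range events {
		if ev.Type != "io.k8s.sigs.minikube.step" {
			continue
		}
		cur, err := strconv.Atoi(ev.Data["currentstep"])
		if err != nil {
			return fmt.Errorf("bad currentstep %q: %v", ev.Data["currentstep"], err)
		}
		if cur <= last {
			return fmt.Errorf("current step is not in increasing order: %d after %d", cur, last)
		}
		last = cur
	}
	return nil
}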

TestJSONOutput/pause/Command (0.09s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-256000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-256000 --output=json --user=testUser: exit status 50 (88.26375ms)
-- stdout --
	{"specversion":"1.0","id":"bb125946-f300-49a9-b4d3-1d917628bf95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Recreate the cluster by running:\n\t\tminikube delete {{.profileArg}}\n\t\tminikube start {{.profileArg}}","exitcode":"50","issues":"","message":"Unable to get control-plane node json-output-256000 endpoint: failed to lookup ip for \"\"","name":"DRV_CP_ENDPOINT","url":""}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-256000 --output=json --user=testUser": exit status 50
--- FAIL: TestJSONOutput/pause/Command (0.09s)

TestJSONOutput/unpause/Command (0.06s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-256000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-256000 --output=json --user=testUser: exit status 50 (60.277041ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node json-output-256000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-256000 --output=json --user=testUser": exit status 50
--- FAIL: TestJSONOutput/unpause/Command (0.06s)
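
The "<no value>" in the suggestion above (like the raw "{{.profileArg}}" in the pause event before it) is Go's text/template behavior when an advice template is rendered without its data: a key missing from a map of interface type prints literally as "<no value>". A minimal reproduction; the template name and the rendering call are illustrative, not minikube's actual code:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Same shape as the advice string in the log; only "profileArg" is
	// taken from the output above.
	t := template.Must(template.New("advice").Parse("minikube delete {{.profileArg}}\n"))
	// A missing key in a map[string]any yields a nil value, which
	// text/template renders as "<no value>".
	if err := t.Execute(os.Stdout, map[string]any{}); err != nil {
		panic(err)
	}
}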

TestMinikubeProfile (190.9s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-878000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-878000 --driver=qemu2 : (34.327552292s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-879000 --driver=qemu2 
E1209 16:38:30.958887    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p second-879000 --driver=qemu2 : exit status 90 (1m21.123563875s)
-- stdout --
	* [second-879000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "second-879000" primary control-plane node in "second-879000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	
	
-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Dec 10 00:38:08 second-879000 systemd[1]: Starting Docker Application Container Engine...
	Dec 10 00:38:08 second-879000 dockerd[529]: time="2024-12-10T00:38:08.597020169Z" level=info msg="Starting up"
	Dec 10 00:38:08 second-879000 dockerd[529]: time="2024-12-10T00:38:08.597459502Z" level=info msg="containerd not running, starting managed containerd"
	Dec 10 00:38:08 second-879000 dockerd[529]: time="2024-12-10T00:38:08.598310711Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=537
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.613889127Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.622426711Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.622441211Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.622461294Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.622467544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.622492044Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.622504627Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.622578044Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.622606752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.622613127Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.622617336Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.622641877Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.622725169Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.623287544Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.623296086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.623349002Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.623354419Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.623381336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.623397086Z" level=info msg="metadata content store policy set" policy=shared
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628330294Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628353336Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628360336Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628366461Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628372836Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628414711Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628531836Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628574836Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628580711Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628586502Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628592752Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628597961Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628604919Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628610294Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628616377Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628622044Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628627294Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628632002Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628641252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628647419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628652711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628658461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628663336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628668794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628673794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628679086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628684711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628690669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628695169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628700002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628705211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628713169Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628722169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628727002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628731711Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628783419Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628798419Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628803127Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628808377Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628812461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628818002Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628822502Z" level=info msg="NRI interface is disabled by configuration."
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628940711Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628960794Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628972169Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Dec 10 00:38:08 second-879000 dockerd[537]: time="2024-12-10T00:38:08.628979211Z" level=info msg="containerd successfully booted in 0.015427s"
	Dec 10 00:38:09 second-879000 dockerd[529]: time="2024-12-10T00:38:09.636872170Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 10 00:38:09 second-879000 dockerd[529]: time="2024-12-10T00:38:09.645442545Z" level=info msg="Loading containers: start."
	Dec 10 00:38:09 second-879000 dockerd[529]: time="2024-12-10T00:38:09.687487503Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Dec 10 00:38:09 second-879000 dockerd[529]: time="2024-12-10T00:38:09.718014170Z" level=info msg="Loading containers: done."
	Dec 10 00:38:09 second-879000 dockerd[529]: time="2024-12-10T00:38:09.724523086Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Dec 10 00:38:09 second-879000 dockerd[529]: time="2024-12-10T00:38:09.724536836Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Dec 10 00:38:09 second-879000 dockerd[529]: time="2024-12-10T00:38:09.724548836Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Dec 10 00:38:09 second-879000 dockerd[529]: time="2024-12-10T00:38:09.724584045Z" level=info msg="Daemon has completed initialization"
	Dec 10 00:38:09 second-879000 dockerd[529]: time="2024-12-10T00:38:09.738298670Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 10 00:38:09 second-879000 dockerd[529]: time="2024-12-10T00:38:09.738336545Z" level=info msg="API listen on [::]:2376"
	Dec 10 00:38:09 second-879000 systemd[1]: Started Docker Application Container Engine.
	Dec 10 00:38:10 second-879000 systemd[1]: Stopping Docker Application Container Engine...
	Dec 10 00:38:10 second-879000 dockerd[529]: time="2024-12-10T00:38:10.430240212Z" level=info msg="Processing signal 'terminated'"
	Dec 10 00:38:10 second-879000 dockerd[529]: time="2024-12-10T00:38:10.430738337Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Dec 10 00:38:10 second-879000 dockerd[529]: time="2024-12-10T00:38:10.430835920Z" level=info msg="Daemon shutdown complete"
	Dec 10 00:38:10 second-879000 dockerd[529]: time="2024-12-10T00:38:10.430873337Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Dec 10 00:38:10 second-879000 dockerd[529]: time="2024-12-10T00:38:10.430882212Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Dec 10 00:38:11 second-879000 systemd[1]: docker.service: Deactivated successfully.
	Dec 10 00:38:11 second-879000 systemd[1]: Stopped Docker Application Container Engine.
	Dec 10 00:38:11 second-879000 systemd[1]: Starting Docker Application Container Engine...
	Dec 10 00:38:11 second-879000 dockerd[933]: time="2024-12-10T00:38:11.468249462Z" level=info msg="Starting up"
	Dec 10 00:39:11 second-879000 dockerd[933]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Dec 10 00:39:11 second-879000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Dec 10 00:39:11 second-879000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Dec 10 00:39:11 second-879000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p second-879000 --driver=qemu2 ": exit status 90
panic.go:629: *** TestMinikubeProfile FAILED at 2024-12-09 16:39:11.540695 -0800 PST m=+3383.732635084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-879000 -n second-879000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-879000 -n second-879000: exit status 6 (103.28025ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1209 16:39:11.638152    4254 status.go:458] kubeconfig endpoint: get endpoint: "second-879000" does not appear in /Users/jenkins/minikube-integration/20062-1231/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "second-879000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "second-879000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-879000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-12-09 16:39:11.786119 -0800 PST m=+3383.978060334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-878000 -n first-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-878000 -n first-878000: exit status 3 (1m15.054998334s)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1209 16:40:26.836171    4260 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.10:22: connect: operation timed out
	E1209 16:40:26.836205    4260 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.105.10:22: connect: operation timed out
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "first-878000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "first-878000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-878000
--- FAIL: TestMinikubeProfile (190.90s)
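
The post-mortem helpers above branch on the exit status of `minikube status` (6 for the kubeconfig endpoint mismatch on second-879000, 3 for the SSH dial timeout on first-878000) and treat a nonzero code as "may be ok". One way such a harness can recover the code in Go, sketched with the binary path and flags from the logs; the code-to-state mapping is read off the outputs above, not from minikube's source:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "second-879000", "-n", "second-879000")
	out, err := cmd.Output() // captured stdout, e.g. "Running" or "Error"
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Printf("status %q, exit code %d (may be ok)\n", out, ee.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Printf("status %q, exit code 0\n", out)
}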

TestMountStart/serial/StartWithMountFirst (10.06s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-475000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-475000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.989386208s)
-- stdout --
	* [mount-start-1-475000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-475000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-475000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-475000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-475000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-475000 -n mount-start-1-475000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-475000 -n mount-start-1-475000: exit status 7 (73.592875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-475000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.06s)
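
Every failed start in this and the following sections reduces to the same root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client gets connection refused before the VM can boot. A quick illustrative pre-flight probe (the socket path comes from the logs; the probe itself is not part of the suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that socket_vmnet_client needs; a refused
	// connection here reproduces the ERROR lines in the stdout above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}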

TestMultiNode/serial/FreshStart2Nodes (9.95s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-350000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-350000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.875991334s)
-- stdout --
	* [multinode-350000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-350000" primary control-plane node in "multinode-350000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-350000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1209 16:40:37.293254    4331 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:40:37.293404    4331 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:40:37.293407    4331 out.go:358] Setting ErrFile to fd 2...
	I1209 16:40:37.293410    4331 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:40:37.293548    4331 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:40:37.294669    4331 out.go:352] Setting JSON to false
	I1209 16:40:37.312331    4331 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4207,"bootTime":1733787030,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:40:37.312398    4331 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:40:37.319444    4331 out.go:177] * [multinode-350000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:40:37.327422    4331 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:40:37.327498    4331 notify.go:220] Checking for updates...
	I1209 16:40:37.336452    4331 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:40:37.339341    4331 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:40:37.343323    4331 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:40:37.346425    4331 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:40:37.349346    4331 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:40:37.352601    4331 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:40:37.357372    4331 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:40:37.364365    4331 start.go:297] selected driver: qemu2
	I1209 16:40:37.364379    4331 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:40:37.364386    4331 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:40:37.366921    4331 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 16:40:37.371418    4331 out.go:177] * Automatically selected the socket_vmnet network
	I1209 16:40:37.374473    4331 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:40:37.374502    4331 cni.go:84] Creating CNI manager for ""
	I1209 16:40:37.374524    4331 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1209 16:40:37.374535    4331 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 16:40:37.374572    4331 start.go:340] cluster config:
	{Name:multinode-350000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-350000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:40:37.379530    4331 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:40:37.387298    4331 out.go:177] * Starting "multinode-350000" primary control-plane node in "multinode-350000" cluster
	I1209 16:40:37.391389    4331 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:40:37.391406    4331 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:40:37.391429    4331 cache.go:56] Caching tarball of preloaded images
	I1209 16:40:37.391505    4331 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:40:37.391510    4331 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:40:37.391716    4331 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/multinode-350000/config.json ...
	I1209 16:40:37.391728    4331 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/multinode-350000/config.json: {Name:mk8f8937d14427ca2b544b65428c3a88634b6d19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:40:37.392225    4331 start.go:360] acquireMachinesLock for multinode-350000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:40:37.392276    4331 start.go:364] duration metric: took 44.916µs to acquireMachinesLock for "multinode-350000"
	I1209 16:40:37.392288    4331 start.go:93] Provisioning new machine with config: &{Name:multinode-350000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-350000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:40:37.392316    4331 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:40:37.398355    4331 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 16:40:37.416598    4331 start.go:159] libmachine.API.Create for "multinode-350000" (driver="qemu2")
	I1209 16:40:37.416623    4331 client.go:168] LocalClient.Create starting
	I1209 16:40:37.416689    4331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:40:37.416727    4331 main.go:141] libmachine: Decoding PEM data...
	I1209 16:40:37.416753    4331 main.go:141] libmachine: Parsing certificate...
	I1209 16:40:37.416789    4331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:40:37.416824    4331 main.go:141] libmachine: Decoding PEM data...
	I1209 16:40:37.416833    4331 main.go:141] libmachine: Parsing certificate...
	I1209 16:40:37.417271    4331 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:40:37.578324    4331 main.go:141] libmachine: Creating SSH key...
	I1209 16:40:37.604832    4331 main.go:141] libmachine: Creating Disk image...
	I1209 16:40:37.604837    4331 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:40:37.605027    4331 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/disk.qcow2
	I1209 16:40:37.614825    4331 main.go:141] libmachine: STDOUT: 
	I1209 16:40:37.614856    4331 main.go:141] libmachine: STDERR: 
	I1209 16:40:37.614913    4331 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/disk.qcow2 +20000M
	I1209 16:40:37.623266    4331 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:40:37.623286    4331 main.go:141] libmachine: STDERR: 
	I1209 16:40:37.623317    4331 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/disk.qcow2
	I1209 16:40:37.623324    4331 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:40:37.623334    4331 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:40:37.623359    4331 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:ff:30:0a:8a:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/disk.qcow2
	I1209 16:40:37.625178    4331 main.go:141] libmachine: STDOUT: 
	I1209 16:40:37.625193    4331 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:40:37.625212    4331 client.go:171] duration metric: took 208.583125ms to LocalClient.Create
	I1209 16:40:39.627384    4331 start.go:128] duration metric: took 2.235054833s to createHost
	I1209 16:40:39.627489    4331 start.go:83] releasing machines lock for "multinode-350000", held for 2.235188s
	W1209 16:40:39.627561    4331 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:40:39.641634    4331 out.go:177] * Deleting "multinode-350000" in qemu2 ...
	W1209 16:40:39.672587    4331 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:40:39.672632    4331 start.go:729] Will try again in 5 seconds ...
	I1209 16:40:44.674853    4331 start.go:360] acquireMachinesLock for multinode-350000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:40:44.675451    4331 start.go:364] duration metric: took 494.333µs to acquireMachinesLock for "multinode-350000"
	I1209 16:40:44.675590    4331 start.go:93] Provisioning new machine with config: &{Name:multinode-350000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-350000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:40:44.675908    4331 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:40:44.694590    4331 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 16:40:44.746467    4331 start.go:159] libmachine.API.Create for "multinode-350000" (driver="qemu2")
	I1209 16:40:44.746600    4331 client.go:168] LocalClient.Create starting
	I1209 16:40:44.746759    4331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:40:44.746851    4331 main.go:141] libmachine: Decoding PEM data...
	I1209 16:40:44.746870    4331 main.go:141] libmachine: Parsing certificate...
	I1209 16:40:44.746952    4331 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:40:44.747037    4331 main.go:141] libmachine: Decoding PEM data...
	I1209 16:40:44.747056    4331 main.go:141] libmachine: Parsing certificate...
	I1209 16:40:44.748112    4331 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:40:44.919618    4331 main.go:141] libmachine: Creating SSH key...
	I1209 16:40:45.058892    4331 main.go:141] libmachine: Creating Disk image...
	I1209 16:40:45.058899    4331 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:40:45.059106    4331 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/disk.qcow2
	I1209 16:40:45.069301    4331 main.go:141] libmachine: STDOUT: 
	I1209 16:40:45.069321    4331 main.go:141] libmachine: STDERR: 
	I1209 16:40:45.069399    4331 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/disk.qcow2 +20000M
	I1209 16:40:45.077939    4331 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:40:45.077956    4331 main.go:141] libmachine: STDERR: 
	I1209 16:40:45.077968    4331 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/disk.qcow2
	I1209 16:40:45.077973    4331 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:40:45.077983    4331 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:40:45.078020    4331 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:5d:dd:27:0d:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/disk.qcow2
	I1209 16:40:45.079761    4331 main.go:141] libmachine: STDOUT: 
	I1209 16:40:45.079775    4331 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:40:45.079787    4331 client.go:171] duration metric: took 333.182667ms to LocalClient.Create
	I1209 16:40:47.082079    4331 start.go:128] duration metric: took 2.406142708s to createHost
	I1209 16:40:47.082160    4331 start.go:83] releasing machines lock for "multinode-350000", held for 2.406693583s
	W1209 16:40:47.082568    4331 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-350000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-350000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:40:47.099083    4331 out.go:201] 
	W1209 16:40:47.103355    4331 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:40:47.103380    4331 out.go:270] * 
	* 
	W1209 16:40:47.106205    4331 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:40:47.122208    4331 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-350000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000: exit status 7 (72.933709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-350000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.95s)
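
Both start attempts above fail at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and the profile is left Stopped. Every remaining TestMultiNode subtest below fails as a cascade of this one provisioning error. A minimal pre-flight check for this condition, written here as an illustrative sketch rather than anything the suite actually runs, is to dial the unix socket before calling minikube start:

	// socketcheck.go: an illustrative sketch, not part of the test suite.
	// Verify the socket_vmnet daemon is accepting connections before
	// attempting "minikube start". The socket path matches SocketVMnetPath
	// in the profile config logged above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// This is the failure mode in the log: "connection refused"
			// means nothing is listening on the socket.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}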

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (78.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-350000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-350000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (130.793209ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-350000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-350000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-350000 -- rollout status deployment/busybox: exit status 1 (64.504541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-350000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (62.172917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-350000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 16:40:47.469801    1742 retry.go:31] will retry after 1.191262323s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.485125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-350000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 16:40:48.767949    1742 retry.go:31] will retry after 1.676605144s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.334125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-350000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 16:40:50.554374    1742 retry.go:31] will retry after 2.550741338s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.515167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-350000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 16:40:53.215041    1742 retry.go:31] will retry after 4.816491829s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.448667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-350000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 16:40:58.142380    1742 retry.go:31] will retry after 4.883020277s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.017166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-350000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 16:41:03.135842    1742 retry.go:31] will retry after 10.022139275s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.546916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-350000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 16:41:13.268939    1742 retry.go:31] will retry after 8.414483071s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.622458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-350000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 16:41:21.796525    1742 retry.go:31] will retry after 23.424514507s: failed to retrieve Pod IPs (may be temporary): exit status 1
E1209 16:41:23.002693    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 16:41:39.904025    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.105208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-350000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1209 16:41:45.331527    1742 retry.go:31] will retry after 19.949047918s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.926958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-350000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.672583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-350000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-350000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-350000 -- exec  -- nslookup kubernetes.io: exit status 1 (61.736833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-350000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-350000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-350000 -- exec  -- nslookup kubernetes.default: exit status 1 (61.606792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-350000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-350000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-350000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (61.319208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-350000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000: exit status 7 (34.438ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-350000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (78.46s)
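
The "will retry after ..." lines come from the harness's retry helper (retry.go:31); the delays grow roughly geometrically with jitter (1.2s, 1.7s, 2.6s, 4.8s, 4.9s, 10s, ...) until the budget is exhausted. Because the cluster was never created, every attempt returns the same "no server found" error and the retries merely document the cascade. Below is a sketch of this style of jittered exponential backoff; the names and parameters are illustrative, not minikube's actual helper:

	// backoff.go: an illustrative sketch of jittered exponential backoff
	// in the spirit of the retry.go lines above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryWithBackoff(op func() error, attempts int, base time.Duration) error {
		delay := base
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			// Add up to 50% random jitter so concurrent retries spread out.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
		return err
	}

	func main() {
		err := retryWithBackoff(func() error {
			return errors.New(`no server found for cluster "multinode-350000"`)
		}, 5, time.Second)
		fmt.Println("giving up:", err)
	}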

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-350000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.00825ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-350000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000: exit status 7 (34.217834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-350000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.10s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-350000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-350000 -v 3 --alsologtostderr: exit status 83 (45.953125ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-350000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-350000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:42:05.800523    4453 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:42:05.800900    4453 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:05.800903    4453 out.go:358] Setting ErrFile to fd 2...
	I1209 16:42:05.800906    4453 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:05.801068    4453 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:42:05.801284    4453 mustload.go:65] Loading cluster: multinode-350000
	I1209 16:42:05.801514    4453 config.go:182] Loaded profile config "multinode-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:42:05.805052    4453 out.go:177] * The control-plane node multinode-350000 host is not running: state=Stopped
	I1209 16:42:05.809097    4453 out.go:177]   To start a cluster, run: "minikube start -p multinode-350000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-350000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000: exit status 7 (34.703667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-350000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)
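
Note the distinct exit codes used to classify failures in this report: 80 (GUEST_PROVISION) for the failed start, 83 here for a stopped control-plane host, 85 below (GUEST_NODE_RETRIEVE) for a missing node, and 7 from status when the host is stopped. When wrapping the CLI, the code can be recovered from *exec.ExitError; a minimal sketch using the same binary and profile as this run:

	// exitcode.go: an illustrative sketch of running a minikube command
	// and branching on its exit status, as the harness does above.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "multinode-350000")
		out, err := cmd.Output()
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("host: %s\n", out)
		case errors.As(err, &ee):
			// In this log: 7 = host stopped (may be ok),
			// 83 = control plane not running.
			fmt.Printf("exit status %d, output: %s\n", ee.ExitCode(), out)
		default:
			fmt.Println("could not run minikube:", err)
		}
	}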

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-350000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-350000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (97.768791ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-350000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-350000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-350000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000: exit status 7 (34.484166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-350000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.13s)
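
Two failures stack here: kubectl exits 1 because no multinode-350000 context exists (the cluster was never created), so it writes nothing to stdout, and the test's subsequent decode of that empty output fails with "unexpected end of JSON input". That second message is simply encoding/json's error for empty input, which the short sketch below reproduces:

	// emptyjson.go: demonstrates the decode error seen in the log;
	// unmarshalling empty output yields "unexpected end of JSON input".
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels []map[string]string
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}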

                                                
                                    
TestMultiNode/serial/ProfileList (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-350000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-350000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-350000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"multinode-350000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000: exit status 7 (33.525416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-350000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.09s)
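
The assertion above is a node count over the profile JSON: MultiNodeRequested is true, yet Nodes contains a single control-plane entry because the second node was never provisioned, so the expected count of 3 (presumably the two nodes from the fresh start plus the one AddNode should have added) comes back as 1. Counting nodes from that payload needs only a trimmed decode; the struct below is illustrative and keeps just the fields involved:

	// nodecount.go: an illustrative, trimmed decode of the
	// "profile list --output json" payload above, keeping only
	// the fields needed to count nodes per profile.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Config struct {
				Nodes []struct {
					ControlPlane bool `json:"ControlPlane"`
					Worker       bool `json:"Worker"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		// Abbreviated form of the JSON in the log; only relevant fields kept.
		raw := `{"invalid":[],"valid":[{"Name":"multinode-350000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`
		var pl profileList
		if err := json.Unmarshal([]byte(raw), &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
		}
	}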

                                                
                                    
TestMultiNode/serial/CopyFile (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-350000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-350000 status --output json --alsologtostderr: exit status 7 (33.984417ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-350000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:42:06.100970    4465 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:42:06.101168    4465 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:06.101171    4465 out.go:358] Setting ErrFile to fd 2...
	I1209 16:42:06.101174    4465 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:06.101292    4465 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:42:06.101414    4465 out.go:352] Setting JSON to true
	I1209 16:42:06.101424    4465 mustload.go:65] Loading cluster: multinode-350000
	I1209 16:42:06.101473    4465 notify.go:220] Checking for updates...
	I1209 16:42:06.101637    4465 config.go:182] Loaded profile config "multinode-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:42:06.101645    4465 status.go:174] checking status of multinode-350000 ...
	I1209 16:42:06.101876    4465 status.go:371] multinode-350000 host status = "Stopped" (err=<nil>)
	I1209 16:42:06.101880    4465 status.go:384] host is not running, skipping remaining checks
	I1209 16:42:06.101882    4465 status.go:176] multinode-350000 status: &{Name:multinode-350000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-350000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000: exit status 7 (34.163208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-350000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
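
The decode error is precise: with a single node, status --output json emits one JSON object, while the test unmarshals into []cluster.Status, and encoding/json refuses to put an object into a slice. A tolerant reader, sketched below with a trimmed illustrative Status type, can try the slice shape first and fall back to a single object:

	// statusdecode.go: reproduces the decode mismatch in the log and a
	// tolerant fallback. The Status struct is trimmed to fields shown above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type Status struct {
		Name string `json:"Name"`
		Host string `json:"Host"`
	}

	func decodeStatuses(data []byte) ([]Status, error) {
		var many []Status
		if err := json.Unmarshal(data, &many); err == nil {
			return many, nil
		}
		// Single-node output is a bare object, not an array: the exact
		// mismatch reported as "cannot unmarshal object into Go value
		// of type []cluster.Status" in the log.
		var one Status
		if err := json.Unmarshal(data, &one); err != nil {
			return nil, err
		}
		return []Status{one}, nil
	}

	func main() {
		data := []byte(`{"Name":"multinode-350000","Host":"Stopped"}`)
		sts, err := decodeStatuses(data)
		fmt.Println(sts, err)
	}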

                                                
                                    
TestMultiNode/serial/StopNode (0.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-350000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-350000 node stop m03: exit status 85 (52.379875ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-350000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-350000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-350000 status: exit status 7 (34.043792ms)

                                                
                                                
-- stdout --
	multinode-350000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-350000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-350000 status --alsologtostderr: exit status 7 (35.033042ms)

                                                
                                                
-- stdout --
	multinode-350000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:42:06.257561    4473 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:42:06.257728    4473 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:06.257731    4473 out.go:358] Setting ErrFile to fd 2...
	I1209 16:42:06.257733    4473 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:06.257863    4473 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:42:06.257977    4473 out.go:352] Setting JSON to false
	I1209 16:42:06.257987    4473 mustload.go:65] Loading cluster: multinode-350000
	I1209 16:42:06.258058    4473 notify.go:220] Checking for updates...
	I1209 16:42:06.258196    4473 config.go:182] Loaded profile config "multinode-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:42:06.258204    4473 status.go:174] checking status of multinode-350000 ...
	I1209 16:42:06.258450    4473 status.go:371] multinode-350000 host status = "Stopped" (err=<nil>)
	I1209 16:42:06.258454    4473 status.go:384] host is not running, skipping remaining checks
	I1209 16:42:06.258456    4473 status.go:176] multinode-350000 status: &{Name:multinode-350000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-350000 status --alsologtostderr": multinode-350000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000: exit status 7 (34.041041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-350000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.16s)
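
The stop of m03 fails with exit status 85 because only the primary node ever existed; secondary nodes would have been named multinode-350000-m02 and -m03 had provisioning succeeded. The follow-up assertion ("incorrect number of running kubelets") then finds no running kubelet in the status text. A sketch of that kind of check over the output above, assuming a simple substring count rather than the suite's actual logic:

	// kubeletcount.go: an illustrative sketch, assuming the check is a
	// substring count over the plain-text status output shown above.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		lines := []string{
			"multinode-350000",
			"type: Control Plane",
			"host: Stopped",
			"kubelet: Stopped",
			"apiserver: Stopped",
			"kubeconfig: Stopped",
		}
		status := strings.Join(lines, "\n")
		running := strings.Count(status, "kubelet: Running")
		fmt.Printf("running kubelets: %d\n", running) // 0 in this run
	}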

                                                
                                    
TestMultiNode/serial/StartAfterStop (48.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-350000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-350000 node start m03 -v=7 --alsologtostderr: exit status 85 (49.361792ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:42:06.325427    4477 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:42:06.325721    4477 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:06.325724    4477 out.go:358] Setting ErrFile to fd 2...
	I1209 16:42:06.325726    4477 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:06.325853    4477 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:42:06.326103    4477 mustload.go:65] Loading cluster: multinode-350000
	I1209 16:42:06.326303    4477 config.go:182] Loaded profile config "multinode-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:42:06.331105    4477 out.go:201] 
	W1209 16:42:06.332329    4477 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1209 16:42:06.332335    4477 out.go:270] * 
	* 
	W1209 16:42:06.333744    4477 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:42:06.337016    4477 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I1209 16:42:06.325427    4477 out.go:345] Setting OutFile to fd 1 ...
I1209 16:42:06.325721    4477 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 16:42:06.325724    4477 out.go:358] Setting ErrFile to fd 2...
I1209 16:42:06.325726    4477 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 16:42:06.325853    4477 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
I1209 16:42:06.326103    4477 mustload.go:65] Loading cluster: multinode-350000
I1209 16:42:06.326303    4477 config.go:182] Loaded profile config "multinode-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 16:42:06.331105    4477 out.go:201] 
W1209 16:42:06.332329    4477 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1209 16:42:06.332335    4477 out.go:270] * 
* 
W1209 16:42:06.333744    4477 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1209 16:42:06.337016    4477 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-350000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-350000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-350000 status -v=7 --alsologtostderr: exit status 7 (34.164958ms)

                                                
                                                
-- stdout --
	multinode-350000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:42:06.375540    4479 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:42:06.375723    4479 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:06.375726    4479 out.go:358] Setting ErrFile to fd 2...
	I1209 16:42:06.375728    4479 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:06.375849    4479 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:42:06.375981    4479 out.go:352] Setting JSON to false
	I1209 16:42:06.375991    4479 mustload.go:65] Loading cluster: multinode-350000
	I1209 16:42:06.376047    4479 notify.go:220] Checking for updates...
	I1209 16:42:06.376215    4479 config.go:182] Loaded profile config "multinode-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:42:06.376222    4479 status.go:174] checking status of multinode-350000 ...
	I1209 16:42:06.376463    4479 status.go:371] multinode-350000 host status = "Stopped" (err=<nil>)
	I1209 16:42:06.376466    4479 status.go:384] host is not running, skipping remaining checks
	I1209 16:42:06.376468    4479 status.go:176] multinode-350000 status: &{Name:multinode-350000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 16:42:06.377359    1742 retry.go:31] will retry after 957.21081ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-350000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-350000 status -v=7 --alsologtostderr: exit status 7 (78.901458ms)

                                                
                                                
-- stdout --
	multinode-350000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:42:07.413823    4481 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:42:07.414040    4481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:07.414044    4481 out.go:358] Setting ErrFile to fd 2...
	I1209 16:42:07.414047    4481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:07.414225    4481 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:42:07.414380    4481 out.go:352] Setting JSON to false
	I1209 16:42:07.414392    4481 mustload.go:65] Loading cluster: multinode-350000
	I1209 16:42:07.414429    4481 notify.go:220] Checking for updates...
	I1209 16:42:07.414649    4481 config.go:182] Loaded profile config "multinode-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:42:07.414658    4481 status.go:174] checking status of multinode-350000 ...
	I1209 16:42:07.414939    4481 status.go:371] multinode-350000 host status = "Stopped" (err=<nil>)
	I1209 16:42:07.414943    4481 status.go:384] host is not running, skipping remaining checks
	I1209 16:42:07.414946    4481 status.go:176] multinode-350000 status: &{Name:multinode-350000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 16:42:07.415935    1742 retry.go:31] will retry after 1.385547623s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-350000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-350000 status -v=7 --alsologtostderr: exit status 7 (77.884375ms)

                                                
                                                
-- stdout --
	multinode-350000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:42:08.879635    4483 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:42:08.879840    4483 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:08.879844    4483 out.go:358] Setting ErrFile to fd 2...
	I1209 16:42:08.879847    4483 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:08.880008    4483 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:42:08.880195    4483 out.go:352] Setting JSON to false
	I1209 16:42:08.880216    4483 mustload.go:65] Loading cluster: multinode-350000
	I1209 16:42:08.880248    4483 notify.go:220] Checking for updates...
	I1209 16:42:08.880440    4483 config.go:182] Loaded profile config "multinode-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:42:08.880450    4483 status.go:174] checking status of multinode-350000 ...
	I1209 16:42:08.880753    4483 status.go:371] multinode-350000 host status = "Stopped" (err=<nil>)
	I1209 16:42:08.880758    4483 status.go:384] host is not running, skipping remaining checks
	I1209 16:42:08.880760    4483 status.go:176] multinode-350000 status: &{Name:multinode-350000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 16:42:08.881760    1742 retry.go:31] will retry after 1.511733514s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-350000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-350000 status -v=7 --alsologtostderr: exit status 7 (76.853958ms)

                                                
                                                
-- stdout --
	multinode-350000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:42:10.470737    4488 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:42:10.470920    4488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:10.470924    4488 out.go:358] Setting ErrFile to fd 2...
	I1209 16:42:10.470926    4488 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:10.471091    4488 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:42:10.471238    4488 out.go:352] Setting JSON to false
	I1209 16:42:10.471250    4488 mustload.go:65] Loading cluster: multinode-350000
	I1209 16:42:10.471287    4488 notify.go:220] Checking for updates...
	I1209 16:42:10.471498    4488 config.go:182] Loaded profile config "multinode-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:42:10.471507    4488 status.go:174] checking status of multinode-350000 ...
	I1209 16:42:10.471823    4488 status.go:371] multinode-350000 host status = "Stopped" (err=<nil>)
	I1209 16:42:10.471828    4488 status.go:384] host is not running, skipping remaining checks
	I1209 16:42:10.471831    4488 status.go:176] multinode-350000 status: &{Name:multinode-350000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 16:42:10.472842    1742 retry.go:31] will retry after 3.605967695s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-350000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-350000 status -v=7 --alsologtostderr: exit status 7 (78.324917ms)

                                                
                                                
-- stdout --
	multinode-350000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:42:14.157456    4493 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:42:14.157682    4493 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:14.157686    4493 out.go:358] Setting ErrFile to fd 2...
	I1209 16:42:14.157689    4493 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:14.157856    4493 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:42:14.158019    4493 out.go:352] Setting JSON to false
	I1209 16:42:14.158040    4493 mustload.go:65] Loading cluster: multinode-350000
	I1209 16:42:14.158071    4493 notify.go:220] Checking for updates...
	I1209 16:42:14.158298    4493 config.go:182] Loaded profile config "multinode-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:42:14.158306    4493 status.go:174] checking status of multinode-350000 ...
	I1209 16:42:14.158615    4493 status.go:371] multinode-350000 host status = "Stopped" (err=<nil>)
	I1209 16:42:14.158619    4493 status.go:384] host is not running, skipping remaining checks
	I1209 16:42:14.158622    4493 status.go:176] multinode-350000 status: &{Name:multinode-350000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 16:42:14.159703    1742 retry.go:31] will retry after 6.191520141s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-350000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-350000 status -v=7 --alsologtostderr: exit status 7 (78.600834ms)

                                                
                                                
-- stdout --
	multinode-350000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:42:20.430061    4499 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:42:20.430302    4499 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:20.430306    4499 out.go:358] Setting ErrFile to fd 2...
	I1209 16:42:20.430309    4499 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:20.430493    4499 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:42:20.430662    4499 out.go:352] Setting JSON to false
	I1209 16:42:20.430675    4499 mustload.go:65] Loading cluster: multinode-350000
	I1209 16:42:20.430727    4499 notify.go:220] Checking for updates...
	I1209 16:42:20.430933    4499 config.go:182] Loaded profile config "multinode-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:42:20.430943    4499 status.go:174] checking status of multinode-350000 ...
	I1209 16:42:20.431245    4499 status.go:371] multinode-350000 host status = "Stopped" (err=<nil>)
	I1209 16:42:20.431249    4499 status.go:384] host is not running, skipping remaining checks
	I1209 16:42:20.431252    4499 status.go:176] multinode-350000 status: &{Name:multinode-350000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 16:42:20.432282    1742 retry.go:31] will retry after 4.178919419s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-350000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-350000 status -v=7 --alsologtostderr: exit status 7 (77.694042ms)

                                                
                                                
-- stdout --
	multinode-350000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:42:24.689214    4503 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:42:24.689424    4503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:24.689428    4503 out.go:358] Setting ErrFile to fd 2...
	I1209 16:42:24.689431    4503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:24.689577    4503 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:42:24.689719    4503 out.go:352] Setting JSON to false
	I1209 16:42:24.689731    4503 mustload.go:65] Loading cluster: multinode-350000
	I1209 16:42:24.689770    4503 notify.go:220] Checking for updates...
	I1209 16:42:24.689965    4503 config.go:182] Loaded profile config "multinode-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:42:24.689973    4503 status.go:174] checking status of multinode-350000 ...
	I1209 16:42:24.690251    4503 status.go:371] multinode-350000 host status = "Stopped" (err=<nil>)
	I1209 16:42:24.690255    4503 status.go:384] host is not running, skipping remaining checks
	I1209 16:42:24.690258    4503 status.go:176] multinode-350000 status: &{Name:multinode-350000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 16:42:24.691310    1742 retry.go:31] will retry after 8.006878497s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-350000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-350000 status -v=7 --alsologtostderr: exit status 7 (78.61575ms)

                                                
                                                
-- stdout --
	multinode-350000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:42:32.777005    4507 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:42:32.777235    4507 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:32.777239    4507 out.go:358] Setting ErrFile to fd 2...
	I1209 16:42:32.777242    4507 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:32.777440    4507 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:42:32.777583    4507 out.go:352] Setting JSON to false
	I1209 16:42:32.777600    4507 mustload.go:65] Loading cluster: multinode-350000
	I1209 16:42:32.777640    4507 notify.go:220] Checking for updates...
	I1209 16:42:32.777856    4507 config.go:182] Loaded profile config "multinode-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:42:32.777865    4507 status.go:174] checking status of multinode-350000 ...
	I1209 16:42:32.778182    4507 status.go:371] multinode-350000 host status = "Stopped" (err=<nil>)
	I1209 16:42:32.778186    4507 status.go:384] host is not running, skipping remaining checks
	I1209 16:42:32.778189    4507 status.go:176] multinode-350000 status: &{Name:multinode-350000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1209 16:42:32.779195    1742 retry.go:31] will retry after 21.985648358s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-350000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-350000 status -v=7 --alsologtostderr: exit status 7 (78.432584ms)

                                                
                                                
-- stdout --
	multinode-350000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:42:54.843587    4519 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:42:54.843777    4519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:54.843781    4519 out.go:358] Setting ErrFile to fd 2...
	I1209 16:42:54.843784    4519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:54.843944    4519 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:42:54.844105    4519 out.go:352] Setting JSON to false
	I1209 16:42:54.844116    4519 mustload.go:65] Loading cluster: multinode-350000
	I1209 16:42:54.844158    4519 notify.go:220] Checking for updates...
	I1209 16:42:54.844393    4519 config.go:182] Loaded profile config "multinode-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:42:54.844402    4519 status.go:174] checking status of multinode-350000 ...
	I1209 16:42:54.844742    4519 status.go:371] multinode-350000 host status = "Stopped" (err=<nil>)
	I1209 16:42:54.844746    4519 status.go:384] host is not running, skipping remaining checks
	I1209 16:42:54.844748    4519 status.go:176] multinode-350000 status: &{Name:multinode-350000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-350000 status -v=7 --alsologtostderr" : exit status 7
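
The retry.go lines above show the harness re-running `minikube status` with growing, jittered delays (roughly 1.4s, 1.5s, 3.6s, 6.2s, 4.2s, 8.0s, 22s) before declaring failure. A minimal Go sketch of that polling pattern — doubling-with-jitter here as an assumption; the real retry.go schedule clearly differs, and `check` is a hypothetical stand-in for running the status command:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryStatus polls check until it succeeds or attempts run out, sleeping
// a jittered, roughly doubling delay between tries -- the pattern visible
// in the retry.go lines above.
func retryStatus(attempts int, base time.Duration, check func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = check(); err == nil {
			return nil
		}
		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	// Always fails here, mirroring the stopped VM in the log.
	_ = retryStatus(7, time.Second, func() error {
		return errors.New("exit status 7")
	})
}
```
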
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000: exit status 7 (36.086041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-350000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (48.59s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-350000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-350000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-350000: (3.376768375s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-350000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-350000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.236291958s)

                                                
                                                
-- stdout --
	* [multinode-350000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-350000" primary control-plane node in "multinode-350000" cluster
	* Restarting existing qemu2 VM for "multinode-350000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-350000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:42:58.362498    4545 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:42:58.362692    4545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:58.362696    4545 out.go:358] Setting ErrFile to fd 2...
	I1209 16:42:58.362699    4545 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:42:58.362872    4545 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:42:58.364095    4545 out.go:352] Setting JSON to false
	I1209 16:42:58.384440    4545 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4348,"bootTime":1733787030,"procs":532,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:42:58.384509    4545 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:42:58.389143    4545 out.go:177] * [multinode-350000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:42:58.395082    4545 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:42:58.395130    4545 notify.go:220] Checking for updates...
	I1209 16:42:58.403095    4545 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:42:58.406089    4545 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:42:58.409089    4545 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:42:58.412092    4545 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:42:58.414993    4545 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:42:58.418378    4545 config.go:182] Loaded profile config "multinode-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:42:58.418431    4545 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:42:58.422998    4545 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 16:42:58.430072    4545 start.go:297] selected driver: qemu2
	I1209 16:42:58.430080    4545 start.go:901] validating driver "qemu2" against &{Name:multinode-350000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-350000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:42:58.430156    4545 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:42:58.432738    4545 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:42:58.432765    4545 cni.go:84] Creating CNI manager for ""
	I1209 16:42:58.432788    4545 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 16:42:58.432833    4545 start.go:340] cluster config:
	{Name:multinode-350000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-350000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:42:58.437408    4545 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:42:58.446002    4545 out.go:177] * Starting "multinode-350000" primary control-plane node in "multinode-350000" cluster
	I1209 16:42:58.450068    4545 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:42:58.450084    4545 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:42:58.450100    4545 cache.go:56] Caching tarball of preloaded images
	I1209 16:42:58.450198    4545 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:42:58.450204    4545 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:42:58.450258    4545 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/multinode-350000/config.json ...
	I1209 16:42:58.450811    4545 start.go:360] acquireMachinesLock for multinode-350000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:42:58.450861    4545 start.go:364] duration metric: took 43.458µs to acquireMachinesLock for "multinode-350000"
	I1209 16:42:58.450870    4545 start.go:96] Skipping create...Using existing machine configuration
	I1209 16:42:58.450875    4545 fix.go:54] fixHost starting: 
	I1209 16:42:58.451008    4545 fix.go:112] recreateIfNeeded on multinode-350000: state=Stopped err=<nil>
	W1209 16:42:58.451016    4545 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 16:42:58.455029    4545 out.go:177] * Restarting existing qemu2 VM for "multinode-350000" ...
	I1209 16:42:58.462885    4545 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:42:58.462920    4545 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:5d:dd:27:0d:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/disk.qcow2
	I1209 16:42:58.465123    4545 main.go:141] libmachine: STDOUT: 
	I1209 16:42:58.465141    4545 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:42:58.465171    4545 fix.go:56] duration metric: took 14.294958ms for fixHost
	I1209 16:42:58.465175    4545 start.go:83] releasing machines lock for "multinode-350000", held for 14.310417ms
	W1209 16:42:58.465187    4545 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:42:58.465240    4545 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:42:58.465244    4545 start.go:729] Will try again in 5 seconds ...
	I1209 16:43:03.467535    4545 start.go:360] acquireMachinesLock for multinode-350000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:43:03.468067    4545 start.go:364] duration metric: took 410.709µs to acquireMachinesLock for "multinode-350000"
	I1209 16:43:03.468203    4545 start.go:96] Skipping create...Using existing machine configuration
	I1209 16:43:03.468224    4545 fix.go:54] fixHost starting: 
	I1209 16:43:03.468984    4545 fix.go:112] recreateIfNeeded on multinode-350000: state=Stopped err=<nil>
	W1209 16:43:03.469013    4545 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 16:43:03.473260    4545 out.go:177] * Restarting existing qemu2 VM for "multinode-350000" ...
	I1209 16:43:03.485153    4545 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:43:03.485322    4545 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:5d:dd:27:0d:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/disk.qcow2
	I1209 16:43:03.495345    4545 main.go:141] libmachine: STDOUT: 
	I1209 16:43:03.495397    4545 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:43:03.495480    4545 fix.go:56] duration metric: took 27.257625ms for fixHost
	I1209 16:43:03.495499    4545 start.go:83] releasing machines lock for "multinode-350000", held for 27.405042ms
	W1209 16:43:03.495655    4545 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-350000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-350000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:43:03.503186    4545 out.go:201] 
	W1209 16:43:03.507238    4545 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:43:03.507262    4545 out.go:270] * 
	* 
	W1209 16:43:03.509849    4545 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:43:03.517204    4545 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-350000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-350000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000: exit status 7 (36.402375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-350000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.76s)
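
Every restart attempt above dies on the same error: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A small diagnostic sketch in Go that reproduces just that connectivity check (an illustration, not minikube code):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// Dials the unix socket the qemu2 driver depends on. "Connection refused"
// here means the socket_vmnet daemon is not running (or not listening on
// this path), which is exactly the failure mode in the log above.
func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```

With the daemon down, both "Restarting existing qemu2 VM" attempts fail identically, so the start exits with status 80 (GUEST_PROVISION) after the second try.
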

                                                
                                    
TestMultiNode/serial/DeleteNode (0.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-350000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-350000 node delete m03: exit status 83 (45.086208ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-350000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-350000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-350000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-350000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-350000 status --alsologtostderr: exit status 7 (35.029334ms)

                                                
                                                
-- stdout --
	multinode-350000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:43:03.720748    4561 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:43:03.720930    4561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:43:03.720933    4561 out.go:358] Setting ErrFile to fd 2...
	I1209 16:43:03.720936    4561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:43:03.721088    4561 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:43:03.721215    4561 out.go:352] Setting JSON to false
	I1209 16:43:03.721225    4561 mustload.go:65] Loading cluster: multinode-350000
	I1209 16:43:03.721283    4561 notify.go:220] Checking for updates...
	I1209 16:43:03.721449    4561 config.go:182] Loaded profile config "multinode-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:43:03.721456    4561 status.go:174] checking status of multinode-350000 ...
	I1209 16:43:03.721708    4561 status.go:371] multinode-350000 host status = "Stopped" (err=<nil>)
	I1209 16:43:03.721712    4561 status.go:384] host is not running, skipping remaining checks
	I1209 16:43:03.721714    4561 status.go:176] multinode-350000 status: &{Name:multinode-350000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-350000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000: exit status 7 (34.580458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-350000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.12s)
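
The post-mortem helpers above call status with `--format={{.Host}}`, which renders a Go template against the status value. A minimal sketch of that rendering, reusing the struct shape from the earlier `&{...}` dump (an illustration of the template mechanism, not minikube's actual code path):

```go
package main

import (
	"os"
	"text/template"
)

// Status mirrors two fields of the per-node value seen in the log dumps.
type Status struct {
	Name string
	Host string
}

func main() {
	// Same template text the helpers pass via --format={{.Host}}.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	_ = tmpl.Execute(os.Stdout, Status{Name: "multinode-350000", Host: "Stopped"}) // prints "Stopped"
}
```
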

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-350000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-350000 stop: (3.31445875s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-350000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-350000 status: exit status 7 (69.473125ms)

                                                
                                                
-- stdout --
	multinode-350000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-350000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-350000 status --alsologtostderr: exit status 7 (35.616875ms)

                                                
                                                
-- stdout --
	multinode-350000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:43:07.175519    4588 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:43:07.175696    4588 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:43:07.175699    4588 out.go:358] Setting ErrFile to fd 2...
	I1209 16:43:07.175702    4588 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:43:07.175835    4588 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:43:07.175958    4588 out.go:352] Setting JSON to false
	I1209 16:43:07.175968    4588 mustload.go:65] Loading cluster: multinode-350000
	I1209 16:43:07.176029    4588 notify.go:220] Checking for updates...
	I1209 16:43:07.176169    4588 config.go:182] Loaded profile config "multinode-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:43:07.176178    4588 status.go:174] checking status of multinode-350000 ...
	I1209 16:43:07.176418    4588 status.go:371] multinode-350000 host status = "Stopped" (err=<nil>)
	I1209 16:43:07.176422    4588 status.go:384] host is not running, skipping remaining checks
	I1209 16:43:07.176424    4588 status.go:176] multinode-350000 status: &{Name:multinode-350000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-350000 status --alsologtostderr": multinode-350000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-350000 status --alsologtostderr": multinode-350000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000: exit status 7 (34.175375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-350000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.45s)
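
The two assertions above ("incorrect number of stopped hosts/kubelets") fire because the status output lists a single node where the multi-node test expects more. A hedged sketch of that style of count check — the want of 2 is an assumption based on the two-node cluster this test builds, and this is not the test's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// Counts per-node state lines in the status output, the way a
// "number of stopped hosts" assertion might. Only one node ever
// appears in the log above, so a want of 2 fails.
func main() {
	out := "multinode-350000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	if got := strings.Count(out, "host: Stopped"); got != 2 {
		fmt.Printf("incorrect number of stopped hosts: got %d, want 2\n", got)
	}
}
```
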

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-350000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-350000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.188093875s)

                                                
                                                
-- stdout --
	* [multinode-350000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-350000" primary control-plane node in "multinode-350000" cluster
	* Restarting existing qemu2 VM for "multinode-350000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-350000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:43:07.244112    4592 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:43:07.244270    4592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:43:07.244274    4592 out.go:358] Setting ErrFile to fd 2...
	I1209 16:43:07.244277    4592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:43:07.244410    4592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:43:07.245450    4592 out.go:352] Setting JSON to false
	I1209 16:43:07.262980    4592 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4357,"bootTime":1733787030,"procs":532,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:43:07.263053    4592 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:43:07.268570    4592 out.go:177] * [multinode-350000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:43:07.274475    4592 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:43:07.274546    4592 notify.go:220] Checking for updates...
	I1209 16:43:07.282512    4592 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:43:07.286569    4592 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:43:07.289548    4592 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:43:07.292582    4592 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:43:07.295555    4592 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:43:07.298864    4592 config.go:182] Loaded profile config "multinode-350000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:43:07.299131    4592 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:43:07.303581    4592 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 16:43:07.310509    4592 start.go:297] selected driver: qemu2
	I1209 16:43:07.310518    4592 start.go:901] validating driver "qemu2" against &{Name:multinode-350000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-350000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:43:07.310576    4592 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:43:07.313100    4592 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:43:07.313126    4592 cni.go:84] Creating CNI manager for ""
	I1209 16:43:07.313147    4592 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1209 16:43:07.313185    4592 start.go:340] cluster config:
	{Name:multinode-350000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-350000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:43:07.317730    4592 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:43:07.324536    4592 out.go:177] * Starting "multinode-350000" primary control-plane node in "multinode-350000" cluster
	I1209 16:43:07.328574    4592 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:43:07.328591    4592 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:43:07.328602    4592 cache.go:56] Caching tarball of preloaded images
	I1209 16:43:07.328672    4592 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:43:07.328678    4592 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:43:07.328736    4592 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/multinode-350000/config.json ...
	I1209 16:43:07.329281    4592 start.go:360] acquireMachinesLock for multinode-350000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:43:07.329313    4592 start.go:364] duration metric: took 25.167µs to acquireMachinesLock for "multinode-350000"
	I1209 16:43:07.329322    4592 start.go:96] Skipping create...Using existing machine configuration
	I1209 16:43:07.329327    4592 fix.go:54] fixHost starting: 
	I1209 16:43:07.329449    4592 fix.go:112] recreateIfNeeded on multinode-350000: state=Stopped err=<nil>
	W1209 16:43:07.329458    4592 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 16:43:07.337518    4592 out.go:177] * Restarting existing qemu2 VM for "multinode-350000" ...
	I1209 16:43:07.340452    4592 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:43:07.340497    4592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:5d:dd:27:0d:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/disk.qcow2
	I1209 16:43:07.342703    4592 main.go:141] libmachine: STDOUT: 
	I1209 16:43:07.342728    4592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:43:07.342755    4592 fix.go:56] duration metric: took 13.42875ms for fixHost
	I1209 16:43:07.342761    4592 start.go:83] releasing machines lock for "multinode-350000", held for 13.443417ms
	W1209 16:43:07.342767    4592 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:43:07.342806    4592 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:43:07.342811    4592 start.go:729] Will try again in 5 seconds ...
	I1209 16:43:12.345068    4592 start.go:360] acquireMachinesLock for multinode-350000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:43:12.345522    4592 start.go:364] duration metric: took 350.375µs to acquireMachinesLock for "multinode-350000"
	I1209 16:43:12.345687    4592 start.go:96] Skipping create...Using existing machine configuration
	I1209 16:43:12.345706    4592 fix.go:54] fixHost starting: 
	I1209 16:43:12.346410    4592 fix.go:112] recreateIfNeeded on multinode-350000: state=Stopped err=<nil>
	W1209 16:43:12.346438    4592 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 16:43:12.350846    4592 out.go:177] * Restarting existing qemu2 VM for "multinode-350000" ...
	I1209 16:43:12.354884    4592 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:43:12.355189    4592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:5d:dd:27:0d:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/multinode-350000/disk.qcow2
	I1209 16:43:12.365449    4592 main.go:141] libmachine: STDOUT: 
	I1209 16:43:12.365501    4592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:43:12.365580    4592 fix.go:56] duration metric: took 19.874833ms for fixHost
	I1209 16:43:12.365597    4592 start.go:83] releasing machines lock for "multinode-350000", held for 20.050959ms
	W1209 16:43:12.365775    4592 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-350000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-350000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:43:12.373771    4592 out.go:201] 
	W1209 16:43:12.376868    4592 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:43:12.376915    4592 out.go:270] * 
	W1209 16:43:12.378899    4592 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:43:12.387835    4592 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-350000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000: exit status 7 (75.197083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-350000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
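
All of the qemu2 start failures in this report share one root cause: /opt/socket_vmnet/bin/socket_vmnet_client gets "Connection refused" dialing /var/run/socket_vmnet, so it exits before it can hand QEMU the network file descriptor referenced by -netdev socket,id=net0,fd=3. A minimal shell sketch for checking the daemon on the CI host, assuming the install layout shown in the log; the launchd label and Homebrew service name are assumptions, not taken from this report:

	# Does the socket the driver dials exist, and is anything listening on it?
	ls -l /var/run/socket_vmnet

	# Is a socket_vmnet daemon registered with launchd? (label assumed)
	sudo launchctl list | grep -i socket_vmnet

	# If socket_vmnet was installed via Homebrew (assumed), restarting the service may clear the refusal
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet

Until that socket accepts connections, every test that creates or restarts a qemu2 VM fails the same way before provisioning begins.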

TestMultiNode/serial/ValidateNameConflict (20.13s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-350000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-350000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-350000-m01 --driver=qemu2 : exit status 80 (9.927456167s)

-- stdout --
	* [multinode-350000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-350000-m01" primary control-plane node in "multinode-350000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-350000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-350000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-350000-m02 --driver=qemu2 
E1209 16:43:30.957936    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-350000-m02 --driver=qemu2 : exit status 80 (9.956224375s)

-- stdout --
	* [multinode-350000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-350000-m02" primary control-plane node in "multinode-350000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-350000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-350000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-350000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-350000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-350000: exit status 83 (86.401209ms)

-- stdout --
	* The control-plane node multinode-350000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-350000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-350000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-350000 -n multinode-350000: exit status 7 (35.387166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-350000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.13s)

TestPreload (10.03s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-957000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-957000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.872595208s)

-- stdout --
	* [test-preload-957000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-957000" primary control-plane node in "test-preload-957000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-957000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 16:43:32.762936    4660 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:43:32.763092    4660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:43:32.763096    4660 out.go:358] Setting ErrFile to fd 2...
	I1209 16:43:32.763098    4660 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:43:32.763476    4660 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:43:32.765198    4660 out.go:352] Setting JSON to false
	I1209 16:43:32.783411    4660 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4382,"bootTime":1733787030,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:43:32.783507    4660 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:43:32.790097    4660 out.go:177] * [test-preload-957000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:43:32.798106    4660 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:43:32.798150    4660 notify.go:220] Checking for updates...
	I1209 16:43:32.807099    4660 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:43:32.811102    4660 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:43:32.815116    4660 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:43:32.818077    4660 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:43:32.821063    4660 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:43:32.824400    4660 config.go:182] Loaded profile config "multinode-350000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:43:32.824464    4660 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:43:32.829091    4660 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:43:32.836014    4660 start.go:297] selected driver: qemu2
	I1209 16:43:32.836020    4660 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:43:32.836025    4660 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:43:32.838852    4660 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 16:43:32.842136    4660 out.go:177] * Automatically selected the socket_vmnet network
	I1209 16:43:32.846119    4660 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:43:32.846140    4660 cni.go:84] Creating CNI manager for ""
	I1209 16:43:32.846162    4660 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:43:32.846176    4660 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 16:43:32.846228    4660 start.go:340] cluster config:
	{Name:test-preload-957000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-957000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:43:32.851297    4660 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:43:32.859981    4660 out.go:177] * Starting "test-preload-957000" primary control-plane node in "test-preload-957000" cluster
	I1209 16:43:32.864030    4660 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1209 16:43:32.864139    4660 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/test-preload-957000/config.json ...
	I1209 16:43:32.864158    4660 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/test-preload-957000/config.json: {Name:mk05d489472404280c5efb5b42282722f5e9a96e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:43:32.864143    4660 cache.go:107] acquiring lock: {Name:mkc92f5b3033bc49eb857fe8afc652e5483485ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:43:32.864171    4660 cache.go:107] acquiring lock: {Name:mk39085637a288f092cc45da2fa839eaa673e1ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:43:32.864207    4660 cache.go:107] acquiring lock: {Name:mkc1213b449fe2f6ebee4948b62cc0cec281bbea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:43:32.864357    4660 cache.go:107] acquiring lock: {Name:mk5ae70f6639e6ff15a3db1828c0ea61abfc6324 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:43:32.864269    4660 cache.go:107] acquiring lock: {Name:mka909d77240bc974f91010950ab2662c44dabd5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:43:32.864415    4660 cache.go:107] acquiring lock: {Name:mka3c2bb9119a7362b26def46470224f69202449 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:43:32.864454    4660 cache.go:107] acquiring lock: {Name:mk3bc05523044406209d5411e3db450093062ed1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:43:32.864627    4660 start.go:360] acquireMachinesLock for test-preload-957000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:43:32.864630    4660 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1209 16:43:32.864842    4660 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1209 16:43:32.864911    4660 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1209 16:43:32.864921    4660 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:43:32.864928    4660 start.go:364] duration metric: took 290.25µs to acquireMachinesLock for "test-preload-957000"
	I1209 16:43:32.864942    4660 start.go:93] Provisioning new machine with config: &{Name:test-preload-957000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-957000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:43:32.864974    4660 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:43:32.864376    4660 cache.go:107] acquiring lock: {Name:mk68216179b1859ea7892d19f0aaeeb4aeb24d3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:43:32.865066    4660 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1209 16:43:32.865098    4660 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1209 16:43:32.865150    4660 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 16:43:32.865165    4660 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1209 16:43:32.870018    4660 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 16:43:32.877789    4660 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1209 16:43:32.878900    4660 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1209 16:43:32.879302    4660 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1209 16:43:32.879298    4660 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1209 16:43:32.880832    4660 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:43:32.880844    4660 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 16:43:32.880952    4660 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1209 16:43:32.881001    4660 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1209 16:43:32.888946    4660 start.go:159] libmachine.API.Create for "test-preload-957000" (driver="qemu2")
	I1209 16:43:32.888968    4660 client.go:168] LocalClient.Create starting
	I1209 16:43:32.889050    4660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:43:32.889095    4660 main.go:141] libmachine: Decoding PEM data...
	I1209 16:43:32.889134    4660 main.go:141] libmachine: Parsing certificate...
	I1209 16:43:32.889172    4660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:43:32.889205    4660 main.go:141] libmachine: Decoding PEM data...
	I1209 16:43:32.889215    4660 main.go:141] libmachine: Parsing certificate...
	I1209 16:43:32.889616    4660 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:43:33.053254    4660 main.go:141] libmachine: Creating SSH key...
	I1209 16:43:33.096439    4660 main.go:141] libmachine: Creating Disk image...
	I1209 16:43:33.096456    4660 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:43:33.096675    4660 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/test-preload-957000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/test-preload-957000/disk.qcow2
	I1209 16:43:33.106247    4660 main.go:141] libmachine: STDOUT: 
	I1209 16:43:33.106263    4660 main.go:141] libmachine: STDERR: 
	I1209 16:43:33.106324    4660 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/test-preload-957000/disk.qcow2 +20000M
	I1209 16:43:33.115388    4660 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:43:33.115413    4660 main.go:141] libmachine: STDERR: 
	I1209 16:43:33.115427    4660 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/test-preload-957000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/test-preload-957000/disk.qcow2
	I1209 16:43:33.115431    4660 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:43:33.115440    4660 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:43:33.115468    4660 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/test-preload-957000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/test-preload-957000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/test-preload-957000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:4a:d0:e3:b9:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/test-preload-957000/disk.qcow2
	I1209 16:43:33.118221    4660 main.go:141] libmachine: STDOUT: 
	I1209 16:43:33.118240    4660 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:43:33.118260    4660 client.go:171] duration metric: took 229.287ms to LocalClient.Create
	I1209 16:43:33.449758    4660 cache.go:162] opening:  /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1209 16:43:33.465795    4660 cache.go:162] opening:  /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1209 16:43:33.473791    4660 cache.go:162] opening:  /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1209 16:43:33.522617    4660 cache.go:162] opening:  /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1209 16:43:33.621963    4660 cache.go:157] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1209 16:43:33.621984    4660 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 757.765833ms
	I1209 16:43:33.621994    4660 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W1209 16:43:33.658273    4660 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1209 16:43:33.658302    4660 cache.go:162] opening:  /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1209 16:43:33.710696    4660 cache.go:162] opening:  /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1209 16:43:33.773196    4660 cache.go:162] opening:  /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W1209 16:43:34.135071    4660 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1209 16:43:34.135178    4660 cache.go:162] opening:  /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1209 16:43:34.622223    4660 cache.go:157] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1209 16:43:34.622285    4660 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.7581465s
	I1209 16:43:34.622312    4660 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1209 16:43:35.118519    4660 start.go:128] duration metric: took 2.253524375s to createHost
	I1209 16:43:35.118580    4660 start.go:83] releasing machines lock for "test-preload-957000", held for 2.253648583s
	W1209 16:43:35.118636    4660 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:43:35.135617    4660 out.go:177] * Deleting "test-preload-957000" in qemu2 ...
	W1209 16:43:35.166204    4660 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:43:35.166235    4660 start.go:729] Will try again in 5 seconds ...
	I1209 16:43:36.143642    4660 cache.go:157] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1209 16:43:36.143711    4660 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.279317166s
	I1209 16:43:36.143745    4660 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1209 16:43:36.662844    4660 cache.go:157] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1209 16:43:36.662893    4660 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.798750584s
	I1209 16:43:36.662915    4660 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1209 16:43:38.643284    4660 cache.go:157] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1209 16:43:38.643361    4660 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.779233083s
	I1209 16:43:38.643386    4660 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1209 16:43:38.748325    4660 cache.go:157] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1209 16:43:38.748365    4660 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.884049709s
	I1209 16:43:38.748407    4660 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1209 16:43:38.903770    4660 cache.go:157] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1209 16:43:38.903816    4660 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 6.039532042s
	I1209 16:43:38.903880    4660 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1209 16:43:40.166604    4660 start.go:360] acquireMachinesLock for test-preload-957000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:43:40.167060    4660 start.go:364] duration metric: took 381.375µs to acquireMachinesLock for "test-preload-957000"
	I1209 16:43:40.167211    4660 start.go:93] Provisioning new machine with config: &{Name:test-preload-957000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-957000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:43:40.167569    4660 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:43:40.187121    4660 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 16:43:40.235543    4660 start.go:159] libmachine.API.Create for "test-preload-957000" (driver="qemu2")
	I1209 16:43:40.235596    4660 client.go:168] LocalClient.Create starting
	I1209 16:43:40.235736    4660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:43:40.235827    4660 main.go:141] libmachine: Decoding PEM data...
	I1209 16:43:40.235852    4660 main.go:141] libmachine: Parsing certificate...
	I1209 16:43:40.235944    4660 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:43:40.236000    4660 main.go:141] libmachine: Decoding PEM data...
	I1209 16:43:40.236012    4660 main.go:141] libmachine: Parsing certificate...
	I1209 16:43:40.236591    4660 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:43:40.407541    4660 main.go:141] libmachine: Creating SSH key...
	I1209 16:43:40.529821    4660 main.go:141] libmachine: Creating Disk image...
	I1209 16:43:40.529827    4660 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:43:40.530037    4660 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/test-preload-957000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/test-preload-957000/disk.qcow2
	I1209 16:43:40.540234    4660 main.go:141] libmachine: STDOUT: 
	I1209 16:43:40.540255    4660 main.go:141] libmachine: STDERR: 
	I1209 16:43:40.540313    4660 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/test-preload-957000/disk.qcow2 +20000M
	I1209 16:43:40.548998    4660 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:43:40.549013    4660 main.go:141] libmachine: STDERR: 
	I1209 16:43:40.549025    4660 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/test-preload-957000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/test-preload-957000/disk.qcow2
	I1209 16:43:40.549029    4660 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:43:40.549039    4660 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:43:40.549068    4660 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/test-preload-957000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/test-preload-957000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/test-preload-957000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:f5:54:f1:84:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/test-preload-957000/disk.qcow2
	I1209 16:43:40.550953    4660 main.go:141] libmachine: STDOUT: 
	I1209 16:43:40.550968    4660 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:43:40.550982    4660 client.go:171] duration metric: took 315.381667ms to LocalClient.Create
	I1209 16:43:41.461806    4660 cache.go:157] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I1209 16:43:41.461865    4660 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.597515958s
	I1209 16:43:41.461898    4660 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I1209 16:43:41.461945    4660 cache.go:87] Successfully saved all images to host disk.
	I1209 16:43:42.553301    4660 start.go:128] duration metric: took 2.385669166s to createHost
	I1209 16:43:42.553439    4660 start.go:83] releasing machines lock for "test-preload-957000", held for 2.386363958s
	W1209 16:43:42.553809    4660 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-957000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:43:42.562558    4660 out.go:201] 
	W1209 16:43:42.574463    4660 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:43:42.574489    4660 out.go:270] * 
	W1209 16:43:42.577124    4660 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:43:42.588435    4660 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-957000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-12-09 16:43:42.606433 -0800 PST m=+3654.799382459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-957000 -n test-preload-957000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-957000 -n test-preload-957000: exit status 7 (71.216334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-957000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-957000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-957000
--- FAIL: TestPreload (10.03s)
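
Although the VM create fails, the parallel image-cache goroutines in this test all succeed: cache.go fetches every v1.24.4 image (rewriting coredns and storage-provisioner from amd64 to arm64 along the way) and logs "Successfully saved all images to host disk." A quick sketch for inspecting what the run left behind, using the cache path taken directly from the log above:

	# Per-arch image tarballs written by cache.go during TestPreload
	ls /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/

So the 10-second failure is purely the socket_vmnet refusal; the preload caching machinery itself was exercised and worked.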

TestScheduledStopUnix (10.11s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-769000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-769000 --memory=2048 --driver=qemu2 : exit status 80 (9.950627125s)

-- stdout --
	* [scheduled-stop-769000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-769000" primary control-plane node in "scheduled-stop-769000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-769000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-769000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-769000" primary control-plane node in "scheduled-stop-769000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-769000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-769000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-12-09 16:43:52.717126 -0800 PST m=+3664.910113376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-769000 -n scheduled-stop-769000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-769000 -n scheduled-stop-769000: exit status 7 (72.687ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-769000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-769000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-769000
--- FAIL: TestScheduledStopUnix (10.11s)

TestSkaffold (12.69s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe368134542 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe368134542 version: (1.015296458s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-304000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-304000 --memory=2600 --driver=qemu2 : exit status 80 (9.976417625s)

-- stdout --
	* [skaffold-304000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-304000" primary control-plane node in "skaffold-304000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-304000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-304000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-304000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-304000" primary control-plane node in "skaffold-304000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-304000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-304000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-12-09 16:44:05.414447 -0800 PST m=+3677.607481209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-304000 -n skaffold-304000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-304000 -n skaffold-304000: exit status 7 (67.464291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-304000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-304000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-304000
--- FAIL: TestSkaffold (12.69s)
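
Both VM creation attempts above fail with the same root cause: QEMU's socket_vmnet networking backend could not be reached at /var/run/socket_vmnet ("Connection refused"), so the guest never gets a network and provisioning aborts with GUEST_PROVISION. A hedged diagnostic sketch for the CI host; the Homebrew service invocation follows minikube's qemu2 driver docs and assumes socket_vmnet was installed via Homebrew:

	# is the daemon process alive, and does the socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# if installed via Homebrew, restart the root-owned service, then retry:
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet
	out/minikube-darwin-arm64 start -p skaffold-304000 --memory=2600 --driver=qemu2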

TestRunningBinaryUpgrade (605.3s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.700052791 start -p running-upgrade-688000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.700052791 start -p running-upgrade-688000 --memory=2200 --vm-driver=qemu2 : (55.270723667s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-688000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E1209 16:46:39.902988    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-688000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m36.073651541s)

-- stdout --
	* [running-upgrade-688000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-688000" primary control-plane node in "running-upgrade-688000" cluster
	* Updating the running qemu2 "running-upgrade-688000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1209 16:45:47.332276    5132 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:45:47.332450    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:45:47.332453    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:45:47.332456    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:45:47.332580    5132 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:45:47.333699    5132 out.go:352] Setting JSON to false
	I1209 16:45:47.352701    5132 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4517,"bootTime":1733787030,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:45:47.352774    5132 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:45:47.357950    5132 out.go:177] * [running-upgrade-688000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:45:47.364946    5132 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:45:47.365003    5132 notify.go:220] Checking for updates...
	I1209 16:45:47.372849    5132 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:45:47.376847    5132 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:45:47.379938    5132 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:45:47.382879    5132 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:45:47.385915    5132 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:45:47.389029    5132 config.go:182] Loaded profile config "running-upgrade-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:45:47.391798    5132 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1209 16:45:47.394920    5132 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:45:47.398839    5132 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 16:45:47.405880    5132 start.go:297] selected driver: qemu2
	I1209 16:45:47.405885    5132 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:64988 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1209 16:45:47.405930    5132 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:45:47.408797    5132 cni.go:84] Creating CNI manager for ""
	I1209 16:45:47.408831    5132 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:45:47.408852    5132 start.go:340] cluster config:
	{Name:running-upgrade-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:64988 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1209 16:45:47.408906    5132 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:45:47.417821    5132 out.go:177] * Starting "running-upgrade-688000" primary control-plane node in "running-upgrade-688000" cluster
	I1209 16:45:47.420693    5132 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1209 16:45:47.420708    5132 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1209 16:45:47.420720    5132 cache.go:56] Caching tarball of preloaded images
	I1209 16:45:47.420795    5132 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:45:47.420801    5132 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1209 16:45:47.420851    5132 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/config.json ...
	I1209 16:45:47.421383    5132 start.go:360] acquireMachinesLock for running-upgrade-688000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:45:47.421442    5132 start.go:364] duration metric: took 52.709µs to acquireMachinesLock for "running-upgrade-688000"
	I1209 16:45:47.421451    5132 start.go:96] Skipping create...Using existing machine configuration
	I1209 16:45:47.421457    5132 fix.go:54] fixHost starting: 
	I1209 16:45:47.422168    5132 fix.go:112] recreateIfNeeded on running-upgrade-688000: state=Running err=<nil>
	W1209 16:45:47.422178    5132 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 16:45:47.429854    5132 out.go:177] * Updating the running qemu2 "running-upgrade-688000" VM ...
	I1209 16:45:47.433788    5132 machine.go:93] provisionDockerMachine start ...
	I1209 16:45:47.433856    5132 main.go:141] libmachine: Using SSH client type: native
	I1209 16:45:47.433983    5132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102deefc0] 0x102df1800 <nil>  [] 0s} localhost 64956 <nil> <nil>}
	I1209 16:45:47.433989    5132 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 16:45:47.497289    5132 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-688000
	
	I1209 16:45:47.497304    5132 buildroot.go:166] provisioning hostname "running-upgrade-688000"
	I1209 16:45:47.497357    5132 main.go:141] libmachine: Using SSH client type: native
	I1209 16:45:47.497473    5132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102deefc0] 0x102df1800 <nil>  [] 0s} localhost 64956 <nil> <nil>}
	I1209 16:45:47.497480    5132 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-688000 && echo "running-upgrade-688000" | sudo tee /etc/hostname
	I1209 16:45:47.561769    5132 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-688000
	
	I1209 16:45:47.561844    5132 main.go:141] libmachine: Using SSH client type: native
	I1209 16:45:47.561955    5132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102deefc0] 0x102df1800 <nil>  [] 0s} localhost 64956 <nil> <nil>}
	I1209 16:45:47.561964    5132 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-688000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-688000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-688000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 16:45:47.620616    5132 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 16:45:47.620627    5132 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20062-1231/.minikube CaCertPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20062-1231/.minikube}
	I1209 16:45:47.620636    5132 buildroot.go:174] setting up certificates
	I1209 16:45:47.620641    5132 provision.go:84] configureAuth start
	I1209 16:45:47.620650    5132 provision.go:143] copyHostCerts
	I1209 16:45:47.620729    5132 exec_runner.go:144] found /Users/jenkins/minikube-integration/20062-1231/.minikube/cert.pem, removing ...
	I1209 16:45:47.620736    5132 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20062-1231/.minikube/cert.pem
	I1209 16:45:47.620881    5132 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20062-1231/.minikube/cert.pem (1123 bytes)
	I1209 16:45:47.621123    5132 exec_runner.go:144] found /Users/jenkins/minikube-integration/20062-1231/.minikube/key.pem, removing ...
	I1209 16:45:47.621126    5132 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20062-1231/.minikube/key.pem
	I1209 16:45:47.621189    5132 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20062-1231/.minikube/key.pem (1675 bytes)
	I1209 16:45:47.621331    5132 exec_runner.go:144] found /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.pem, removing ...
	I1209 16:45:47.621335    5132 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.pem
	I1209 16:45:47.621394    5132 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.pem (1082 bytes)
	I1209 16:45:47.621506    5132 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-688000 san=[127.0.0.1 localhost minikube running-upgrade-688000]
	I1209 16:45:47.687285    5132 provision.go:177] copyRemoteCerts
	I1209 16:45:47.687324    5132 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 16:45:47.687331    5132 sshutil.go:53] new ssh client: &{IP:localhost Port:64956 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/running-upgrade-688000/id_rsa Username:docker}
	I1209 16:45:47.718998    5132 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 16:45:47.725974    5132 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1209 16:45:47.732801    5132 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1209 16:45:47.739910    5132 provision.go:87] duration metric: took 119.255542ms to configureAuth
	I1209 16:45:47.739919    5132 buildroot.go:189] setting minikube options for container-runtime
	I1209 16:45:47.740032    5132 config.go:182] Loaded profile config "running-upgrade-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:45:47.740078    5132 main.go:141] libmachine: Using SSH client type: native
	I1209 16:45:47.740172    5132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102deefc0] 0x102df1800 <nil>  [] 0s} localhost 64956 <nil> <nil>}
	I1209 16:45:47.740181    5132 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1209 16:45:47.801043    5132 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1209 16:45:47.801051    5132 buildroot.go:70] root file system type: tmpfs
	I1209 16:45:47.801107    5132 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1209 16:45:47.801182    5132 main.go:141] libmachine: Using SSH client type: native
	I1209 16:45:47.801302    5132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102deefc0] 0x102df1800 <nil>  [] 0s} localhost 64956 <nil> <nil>}
	I1209 16:45:47.801335    5132 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1209 16:45:47.863600    5132 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1209 16:45:47.863661    5132 main.go:141] libmachine: Using SSH client type: native
	I1209 16:45:47.863765    5132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102deefc0] 0x102df1800 <nil>  [] 0s} localhost 64956 <nil> <nil>}
	I1209 16:45:47.863773    5132 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1209 16:45:47.924561    5132 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 16:45:47.924574    5132 machine.go:96] duration metric: took 490.781083ms to provisionDockerMachine
	I1209 16:45:47.924579    5132 start.go:293] postStartSetup for "running-upgrade-688000" (driver="qemu2")
	I1209 16:45:47.924585    5132 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 16:45:47.924644    5132 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 16:45:47.924654    5132 sshutil.go:53] new ssh client: &{IP:localhost Port:64956 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/running-upgrade-688000/id_rsa Username:docker}
	I1209 16:45:47.955479    5132 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 16:45:47.956806    5132 info.go:137] Remote host: Buildroot 2021.02.12
	I1209 16:45:47.956814    5132 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20062-1231/.minikube/addons for local assets ...
	I1209 16:45:47.956892    5132 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20062-1231/.minikube/files for local assets ...
	I1209 16:45:47.957039    5132 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20062-1231/.minikube/files/etc/ssl/certs/17422.pem -> 17422.pem in /etc/ssl/certs
	I1209 16:45:47.957207    5132 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 16:45:47.959753    5132 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/files/etc/ssl/certs/17422.pem --> /etc/ssl/certs/17422.pem (1708 bytes)
	I1209 16:45:47.966490    5132 start.go:296] duration metric: took 41.906709ms for postStartSetup
	I1209 16:45:47.966502    5132 fix.go:56] duration metric: took 545.049ms for fixHost
	I1209 16:45:47.966539    5132 main.go:141] libmachine: Using SSH client type: native
	I1209 16:45:47.966636    5132 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102deefc0] 0x102df1800 <nil>  [] 0s} localhost 64956 <nil> <nil>}
	I1209 16:45:47.966640    5132 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 16:45:48.026427    5132 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733791547.801917138
	
	I1209 16:45:48.026437    5132 fix.go:216] guest clock: 1733791547.801917138
	I1209 16:45:48.026441    5132 fix.go:229] Guest: 2024-12-09 16:45:47.801917138 -0800 PST Remote: 2024-12-09 16:45:47.966503 -0800 PST m=+0.656615918 (delta=-164.585862ms)
	I1209 16:45:48.026451    5132 fix.go:200] guest clock delta is within tolerance: -164.585862ms
	I1209 16:45:48.026455    5132 start.go:83] releasing machines lock for "running-upgrade-688000", held for 605.009625ms
	I1209 16:45:48.026536    5132 ssh_runner.go:195] Run: cat /version.json
	I1209 16:45:48.026546    5132 sshutil.go:53] new ssh client: &{IP:localhost Port:64956 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/running-upgrade-688000/id_rsa Username:docker}
	I1209 16:45:48.026536    5132 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 16:45:48.026595    5132 sshutil.go:53] new ssh client: &{IP:localhost Port:64956 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/running-upgrade-688000/id_rsa Username:docker}
	W1209 16:45:48.027099    5132 sshutil.go:64] dial failure (will retry): dial tcp [::1]:64956: connect: connection refused
	I1209 16:45:48.027117    5132 retry.go:31] will retry after 333.83801ms: dial tcp [::1]:64956: connect: connection refused
	W1209 16:45:48.056421    5132 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1209 16:45:48.056468    5132 ssh_runner.go:195] Run: systemctl --version
	I1209 16:45:48.058417    5132 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 16:45:48.060022    5132 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 16:45:48.060057    5132 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1209 16:45:48.062842    5132 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1209 16:45:48.067628    5132 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 16:45:48.067635    5132 start.go:495] detecting cgroup driver to use...
	I1209 16:45:48.067690    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 16:45:48.073312    5132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1209 16:45:48.076716    5132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 16:45:48.080360    5132 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 16:45:48.080388    5132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 16:45:48.083931    5132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 16:45:48.087193    5132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 16:45:48.089994    5132 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 16:45:48.093082    5132 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 16:45:48.096505    5132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 16:45:48.099880    5132 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 16:45:48.102811    5132 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 16:45:48.105723    5132 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 16:45:48.110932    5132 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 16:45:48.113764    5132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:45:48.210302    5132 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 16:45:48.216704    5132 start.go:495] detecting cgroup driver to use...
	I1209 16:45:48.216776    5132 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1209 16:45:48.224613    5132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 16:45:48.229625    5132 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 16:45:48.235943    5132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 16:45:48.240844    5132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 16:45:48.245870    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 16:45:48.251352    5132 ssh_runner.go:195] Run: which cri-dockerd
	I1209 16:45:48.252812    5132 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1209 16:45:48.255994    5132 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1209 16:45:48.261134    5132 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1209 16:45:48.353945    5132 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1209 16:45:48.436815    5132 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1209 16:45:48.436866    5132 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1209 16:45:48.442425    5132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:45:48.530802    5132 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1209 16:46:01.375426    5132 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.844655042s)
	I1209 16:46:01.375502    5132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1209 16:46:01.379974    5132 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1209 16:46:01.388265    5132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1209 16:46:01.393964    5132 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1209 16:46:01.470478    5132 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1209 16:46:01.550646    5132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:46:01.628697    5132 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1209 16:46:01.634777    5132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1209 16:46:01.639364    5132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:46:01.716081    5132 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1209 16:46:01.754468    5132 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1209 16:46:01.754556    5132 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1209 16:46:01.757367    5132 start.go:563] Will wait 60s for crictl version
	I1209 16:46:01.757426    5132 ssh_runner.go:195] Run: which crictl
	I1209 16:46:01.758714    5132 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 16:46:01.770687    5132 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1209 16:46:01.770767    5132 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1209 16:46:01.787482    5132 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1209 16:46:01.805600    5132 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1209 16:46:01.805757    5132 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1209 16:46:01.807088    5132 kubeadm.go:883] updating cluster {Name:running-upgrade-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:64988 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1209 16:46:01.807134    5132 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1209 16:46:01.807177    5132 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1209 16:46:01.817388    5132 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1209 16:46:01.817396    5132 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1209 16:46:01.817452    5132 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1209 16:46:01.821053    5132 ssh_runner.go:195] Run: which lz4
	I1209 16:46:01.822446    5132 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 16:46:01.823736    5132 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 16:46:01.823747    5132 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1209 16:46:02.785012    5132 docker.go:653] duration metric: took 962.616ms to copy over tarball
	I1209 16:46:02.785088    5132 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 16:46:04.007003    5132 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.221905s)
	I1209 16:46:04.007017    5132 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 16:46:04.022648    5132 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1209 16:46:04.025689    5132 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1209 16:46:04.030925    5132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:46:04.100568    5132 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1209 16:46:04.323197    5132 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1209 16:46:04.335268    5132 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1209 16:46:04.335277    5132 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1209 16:46:04.335282    5132 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 16:46:04.339914    5132 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:46:04.342088    5132 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 16:46:04.344389    5132 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:46:04.344415    5132 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 16:46:04.346164    5132 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 16:46:04.346299    5132 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 16:46:04.347664    5132 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 16:46:04.347686    5132 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 16:46:04.349289    5132 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 16:46:04.349310    5132 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1209 16:46:04.349970    5132 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 16:46:04.350474    5132 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1209 16:46:04.351670    5132 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1209 16:46:04.351864    5132 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 16:46:04.352436    5132 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1209 16:46:04.353581    5132 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 16:46:04.947664    5132 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1209 16:46:04.958190    5132 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1209 16:46:04.958221    5132 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 16:46:04.958274    5132 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1209 16:46:04.968995    5132 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1209 16:46:04.969646    5132 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1209 16:46:04.980989    5132 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1209 16:46:04.981025    5132 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 16:46:04.981083    5132 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1209 16:46:04.991245    5132 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1209 16:46:05.022606    5132 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 16:46:05.033851    5132 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1209 16:46:05.033873    5132 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 16:46:05.033929    5132 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 16:46:05.038284    5132 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1209 16:46:05.050370    5132 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1209 16:46:05.051784    5132 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1209 16:46:05.051800    5132 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 16:46:05.051864    5132 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1209 16:46:05.067329    5132 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1209 16:46:05.094298    5132 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1209 16:46:05.104450    5132 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1209 16:46:05.104467    5132 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1209 16:46:05.104522    5132 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1209 16:46:05.114770    5132 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1209 16:46:05.114909    5132 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1209 16:46:05.116605    5132 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1209 16:46:05.116616    5132 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1209 16:46:05.166214    5132 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1209 16:46:05.198368    5132 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1209 16:46:05.198391    5132 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1209 16:46:05.198456    5132 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	W1209 16:46:05.206213    5132 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1209 16:46:05.206340    5132 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1209 16:46:05.228465    5132 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1209 16:46:05.228626    5132 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1209 16:46:05.239504    5132 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1209 16:46:05.239529    5132 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 16:46:05.239602    5132 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1209 16:46:05.245036    5132 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1209 16:46:05.245057    5132 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1209 16:46:05.270070    5132 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1209 16:46:05.270220    5132 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1209 16:46:05.287820    5132 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1209 16:46:05.287834    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1209 16:46:05.288861    5132 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1209 16:46:05.288881    5132 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	W1209 16:46:05.301893    5132 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1209 16:46:05.301998    5132 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:46:05.380526    5132 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1209 16:46:05.380568    5132 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1209 16:46:05.380591    5132 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:46:05.380650    5132 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:46:05.409766    5132 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1209 16:46:05.409779    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1209 16:46:06.220127    5132 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1209 16:46:06.220178    5132 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1209 16:46:06.220236    5132 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1209 16:46:06.220255    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1209 16:46:06.220594    5132 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1209 16:46:06.395826    5132 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1209 16:46:06.395886    5132 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1209 16:46:06.395910    5132 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1209 16:46:06.429398    5132 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1209 16:46:06.429412    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1209 16:46:06.662156    5132 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1209 16:46:06.662199    5132 cache_images.go:92] duration metric: took 2.326918458s to LoadCachedImages
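The lines above show minikube's cache-load loop: stat the image tarball inside the VM, scp it over only when missing, then pipe it into "docker load". A minimal local stand-in for that check-then-load logic (the real flow runs each command in the guest over SSH via ssh_runner; the helper name loadIfMissing is mine, not minikube's):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // loadIfMissing mirrors the log above: skip the transfer when the image
    // tarball already exists at dst, otherwise copy it there and feed it to
    // `docker load`. Local stand-in for minikube's scp + ssh_runner calls.
    func loadIfMissing(src, dst string) error {
    	if _, err := os.Stat(dst); err == nil {
    		return nil // existence check succeeded; nothing to transfer
    	}
    	data, err := os.ReadFile(src)
    	if err != nil {
    		return err
    	}
    	if err := os.WriteFile(dst, data, 0o644); err != nil {
    		return err
    	}
    	// Equivalent of: /bin/bash -c "sudo cat <dst> | docker load"
    	cmd := exec.Command("docker", "load", "-i", dst)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	if err := loadIfMissing(os.Args[1], os.Args[2]); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }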
	W1209 16:46:06.662242    5132 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I1209 16:46:06.662247    5132 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1209 16:46:06.662306    5132 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-688000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 16:46:06.662387    5132 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1209 16:46:06.676253    5132 cni.go:84] Creating CNI manager for ""
	I1209 16:46:06.676264    5132 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:46:06.676274    5132 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 16:46:06.676282    5132 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-688000 NodeName:running-upgrade-688000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 16:46:06.676360    5132 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-688000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
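The generated config above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) concatenated with "---" separators into the single kubeadm.yaml that gets staged next. A sketch of that concatenation step (joinDocs is a hypothetical helper, not minikube's):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // kubeadm accepts multiple YAML documents in one file, separated by "---".
    // The generated kubeadm config above is exactly such a concatenation.
    func joinDocs(docs ...string) string {
    	trimmed := make([]string, 0, len(docs))
    	for _, d := range docs {
    		trimmed = append(trimmed, strings.TrimSpace(d))
    	}
    	return strings.Join(trimmed, "\n---\n") + "\n"
    }

    func main() {
    	fmt.Print(joinDocs("kind: InitConfiguration", "kind: ClusterConfiguration"))
    }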
	I1209 16:46:06.676441    5132 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1209 16:46:06.679226    5132 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 16:46:06.679267    5132 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 16:46:06.682339    5132 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1209 16:46:06.687496    5132 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 16:46:06.694016    5132 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1209 16:46:06.699711    5132 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1209 16:46:06.701168    5132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:46:06.769993    5132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 16:46:06.775486    5132 certs.go:68] Setting up /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000 for IP: 10.0.2.15
	I1209 16:46:06.775492    5132 certs.go:194] generating shared ca certs ...
	I1209 16:46:06.775502    5132 certs.go:226] acquiring lock for ca certs: {Name:mk94909c12771095ef5e42af3f5ec988b0b9c452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:46:06.775685    5132 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.key
	I1209 16:46:06.775748    5132 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/proxy-client-ca.key
	I1209 16:46:06.775755    5132 certs.go:256] generating profile certs ...
	I1209 16:46:06.775815    5132 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/client.key
	I1209 16:46:06.775831    5132 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/apiserver.key.c9fb23b3
	I1209 16:46:06.775843    5132 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/apiserver.crt.c9fb23b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1209 16:46:06.870057    5132 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/apiserver.crt.c9fb23b3 ...
	I1209 16:46:06.870063    5132 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/apiserver.crt.c9fb23b3: {Name:mk26d4e75fa16b8ba9f7ac41cc410018bd55322b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:46:06.870319    5132 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/apiserver.key.c9fb23b3 ...
	I1209 16:46:06.870324    5132 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/apiserver.key.c9fb23b3: {Name:mke5d7da2777546479ac20655a49dc2592bb0959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:46:06.870490    5132 certs.go:381] copying /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/apiserver.crt.c9fb23b3 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/apiserver.crt
	I1209 16:46:06.870624    5132 certs.go:385] copying /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/apiserver.key.c9fb23b3 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/apiserver.key
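The ".c9fb23b3" suffix on the apiserver cert lets minikube keep one signed cert per distinct SAN set: when the requested IPs change, the suffix changes, a fresh cert is generated, and the current one is copied to the canonical apiserver.crt/apiserver.key paths as above. A hypothetical sketch of deriving such a suffix (minikube's actual hashing scheme may differ; sanSuffix is my name):

    package main

    import (
    	"fmt"
    	"hash/fnv"
    	"strings"
    )

    // sanSuffix derives a short, stable suffix from the certificate's SAN
    // list, so certs for different IP sets can coexist in the profile dir.
    // Illustrative only: minikube's real scheme may differ.
    func sanSuffix(ips []string) string {
    	h := fnv.New32a()
    	h.Write([]byte(strings.Join(ips, ",")))
    	return fmt.Sprintf("%08x", h.Sum32())
    }

    func main() {
    	fmt.Println("apiserver.crt." + sanSuffix([]string{"10.96.0.1", "127.0.0.1", "10.0.0.1", "10.0.2.15"}))
    }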
	I1209 16:46:06.870772    5132 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/proxy-client.key
	I1209 16:46:06.870904    5132 certs.go:484] found cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/1742.pem (1338 bytes)
	W1209 16:46:06.870940    5132 certs.go:480] ignoring /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/1742_empty.pem, impossibly tiny 0 bytes
	I1209 16:46:06.870946    5132 certs.go:484] found cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 16:46:06.870966    5132 certs.go:484] found cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem (1082 bytes)
	I1209 16:46:06.870985    5132 certs.go:484] found cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem (1123 bytes)
	I1209 16:46:06.871004    5132 certs.go:484] found cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/key.pem (1675 bytes)
	I1209 16:46:06.871043    5132 certs.go:484] found cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/files/etc/ssl/certs/17422.pem (1708 bytes)
	I1209 16:46:06.871350    5132 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 16:46:06.878567    5132 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 16:46:06.885297    5132 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 16:46:06.892560    5132 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 16:46:06.900358    5132 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 16:46:06.907842    5132 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 16:46:06.915634    5132 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 16:46:06.922646    5132 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 16:46:06.929513    5132 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 16:46:06.936519    5132 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/1742.pem --> /usr/share/ca-certificates/1742.pem (1338 bytes)
	I1209 16:46:06.943952    5132 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/files/etc/ssl/certs/17422.pem --> /usr/share/ca-certificates/17422.pem (1708 bytes)
	I1209 16:46:06.951549    5132 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 16:46:06.956989    5132 ssh_runner.go:195] Run: openssl version
	I1209 16:46:06.959018    5132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1742.pem && ln -fs /usr/share/ca-certificates/1742.pem /etc/ssl/certs/1742.pem"
	I1209 16:46:06.962028    5132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1742.pem
	I1209 16:46:06.963595    5132 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:51 /usr/share/ca-certificates/1742.pem
	I1209 16:46:06.963629    5132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1742.pem
	I1209 16:46:06.965493    5132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1742.pem /etc/ssl/certs/51391683.0"
	I1209 16:46:06.968331    5132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17422.pem && ln -fs /usr/share/ca-certificates/17422.pem /etc/ssl/certs/17422.pem"
	I1209 16:46:06.971634    5132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17422.pem
	I1209 16:46:06.972980    5132 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:51 /usr/share/ca-certificates/17422.pem
	I1209 16:46:06.973005    5132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17422.pem
	I1209 16:46:06.974836    5132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17422.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 16:46:06.977578    5132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 16:46:06.980589    5132 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 16:46:06.982176    5132 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:43 /usr/share/ca-certificates/minikubeCA.pem
	I1209 16:46:06.982204    5132 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 16:46:06.983972    5132 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
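The "openssl x509 -hash" plus "ln -fs ... /etc/ssl/certs/<hash>.0" pairs above build OpenSSL-style hash links: OpenSSL locates trusted CA certs by the subject-name hash in the link's filename. A sketch of the same linking step under those semantics (linkByHash is my name):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkByHash reproduces the symlink step from the log: compute the
    // OpenSSL subject-name hash of a PEM certificate and link it into an
    // /etc/ssl/certs-style directory as <hash>.0 so OpenSSL can find it.
    func linkByHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	os.Remove(link) // equivalent of ln -f: replace any stale link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkByHash(os.Args[1], os.Args[2]); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }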
	I1209 16:46:06.987235    5132 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 16:46:06.988909    5132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 16:46:06.991304    5132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 16:46:06.993425    5132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 16:46:06.995738    5132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 16:46:06.997870    5132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 16:46:06.999734    5132 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
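Each "-checkend 86400" run above makes openssl exit nonzero if the cert expires within the next 86400 seconds (24 hours), which is how minikube decides whether to regenerate control-plane certs. The same test in Go's standard library, as a sketch (expiresWithin is my name):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, i.e. what `openssl x509 -noout -checkend <seconds>` tests.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin(os.Args[1], 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(2)
    	}
    	fmt.Println("expires within 24h:", soon)
    }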
	I1209 16:46:07.001652    5132 kubeadm.go:392] StartCluster: {Name:running-upgrade-688000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:64988 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-688000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1209 16:46:07.001731    5132 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1209 16:46:07.012025    5132 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 16:46:07.015228    5132 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 16:46:07.015244    5132 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 16:46:07.015277    5132 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 16:46:07.018253    5132 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 16:46:07.018538    5132 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-688000" does not appear in /Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:46:07.018595    5132 kubeconfig.go:62] /Users/jenkins/minikube-integration/20062-1231/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-688000" cluster setting kubeconfig missing "running-upgrade-688000" context setting]
	I1209 16:46:07.018768    5132 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/kubeconfig: {Name:mk5092322010dd3bee2f23e3f2812067ca57270f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:46:07.019225    5132 kapi.go:59] client config for running-upgrade-688000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/client.key", CAFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10484b740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 16:46:07.019557    5132 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 16:46:07.022263    5132 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-688000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
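The drift check above relies on diff's exit codes: "diff -u" exits 0 when the deployed kubeadm.yaml matches the freshly generated kubeadm.yaml.new, 1 when they differ (drift, so reconfigure), and 2 or more on error. A sketch of that decision (configDrifted is my name):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // configDrifted mirrors the drift check in the log: run `diff -u` on the
    // deployed kubeadm.yaml and the freshly generated kubeadm.yaml.new.
    // diff exits 0 when identical and 1 when the files differ.
    func configDrifted(current, next string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", current, next).CombinedOutput()
    	if err == nil {
    		return false, "", nil
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		return true, string(out), nil // files differ: reconfigure cluster
    	}
    	return false, "", err // exit code >= 2: diff itself failed
    }

    func main() {
    	drifted, patch, err := configDrifted(os.Args[1], os.Args[2])
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(2)
    	}
    	if drifted {
    		fmt.Print(patch)
    	}
    }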
	I1209 16:46:07.022269    5132 kubeadm.go:1160] stopping kube-system containers ...
	I1209 16:46:07.022315    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1209 16:46:07.033244    5132 docker.go:483] Stopping containers: [9a816024adda 40f038c63a94 c2f63d218d3e 88016bd623e1 fbb0d697e7d5 83ee3da72236 65ecdbb23e21 0bc90c80583b 74824a4c15e7 8cd7609e0999 c213227f7b0d 0f2f53c34aff d99713dccef7 7929fdd855bc]
	I1209 16:46:07.033310    5132 ssh_runner.go:195] Run: docker stop 9a816024adda 40f038c63a94 c2f63d218d3e 88016bd623e1 fbb0d697e7d5 83ee3da72236 65ecdbb23e21 0bc90c80583b 74824a4c15e7 8cd7609e0999 c213227f7b0d 0f2f53c34aff d99713dccef7 7929fdd855bc
	I1209 16:46:07.044407    5132 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 16:46:07.125930    5132 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 16:46:07.129627    5132 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Dec 10 00:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Dec 10 00:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec 10 00:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Dec 10 00:45 /etc/kubernetes/scheduler.conf
	
	I1209 16:46:07.129675    5132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/admin.conf
	I1209 16:46:07.132454    5132 kubeadm.go:163] "https://control-plane.minikube.internal:64988" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 16:46:07.132491    5132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 16:46:07.135454    5132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/kubelet.conf
	I1209 16:46:07.138796    5132 kubeadm.go:163] "https://control-plane.minikube.internal:64988" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 16:46:07.138887    5132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 16:46:07.144642    5132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/controller-manager.conf
	I1209 16:46:07.147463    5132 kubeadm.go:163] "https://control-plane.minikube.internal:64988" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 16:46:07.147494    5132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 16:46:07.150346    5132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/scheduler.conf
	I1209 16:46:07.153527    5132 kubeadm.go:163] "https://control-plane.minikube.internal:64988" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1209 16:46:07.153564    5132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 16:46:07.163575    5132 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 16:46:07.180234    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 16:46:07.210431    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 16:46:07.984483    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 16:46:08.197428    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 16:46:08.223175    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
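Note that the restart path does not run a full "kubeadm init"; it re-runs only the five phases shown above (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged config. A sketch of driving that sequence, using the exact commands from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// The five kubeadm init phases re-run during a control-plane
    	// restart, in the order they appear in the log above.
    	phases := []string{
    		"certs all",
    		"kubeconfig all",
    		"kubelet-start",
    		"control-plane all",
    		"etcd local",
    	}
    	for _, p := range phases {
    		script := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
    		cmd := exec.Command("/bin/bash", "-c", script)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", p, err)
    			os.Exit(1)
    		}
    	}
    }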
	I1209 16:46:08.245937    5132 api_server.go:52] waiting for apiserver process to appear ...
	I1209 16:46:08.246027    5132 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 16:46:08.748222    5132 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 16:46:09.248117    5132 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 16:46:09.252456    5132 api_server.go:72] duration metric: took 1.00652575s to wait for apiserver process to appear ...
	I1209 16:46:09.252465    5132 api_server.go:88] waiting for apiserver healthz status ...
	I1209 16:46:09.252482    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:46:14.254689    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:46:14.254824    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:46:19.255889    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:46:19.256012    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:46:24.257235    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:46:24.257321    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:46:29.258567    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:46:29.258604    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:46:34.259039    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:46:34.259090    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:46:39.260714    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:46:39.260822    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:46:44.263236    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:46:44.263332    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:46:49.265991    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:46:49.266088    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:46:54.268826    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:46:54.268917    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:46:59.271470    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:46:59.271574    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:47:04.274283    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:47:04.274370    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:47:09.276843    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
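Every healthz probe above fails after roughly five seconds ("Client.Timeout exceeded while awaiting headers") and is retried until minikube gives up and switches to log collection below. A minimal poller with the same shape (sketch only: it skips TLS verification for brevity, whereas minikube authenticates with the cluster CA and client certs; waitHealthz is my name):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it answers
    // 200 or the deadline passes, mimicking the 5s-timeout probes above.
    func waitHealthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Sketch only: minikube presents client certs and verifies
    			// the cluster CA instead of skipping verification.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", deadline)
    }

    func main() {
    	if err := waitHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }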
	I1209 16:47:09.277374    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:47:09.316276    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:47:09.316412    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:47:09.341679    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:47:09.341782    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:47:09.359197    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:47:09.359278    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:47:09.370607    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:47:09.370698    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:47:09.380696    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:47:09.380760    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:47:09.391158    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:47:09.391246    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:47:09.401171    5132 logs.go:282] 0 containers: []
	W1209 16:47:09.401180    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:47:09.401236    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:47:09.411630    5132 logs.go:282] 0 containers: []
	W1209 16:47:09.411641    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:47:09.411662    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:47:09.411668    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:47:09.426653    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:47:09.426663    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:47:09.440016    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:47:09.440028    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:47:09.461791    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:47:09.461803    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:47:09.473569    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:47:09.473581    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:47:09.488297    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:47:09.488308    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:47:09.500020    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:47:09.500031    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:47:09.518184    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:47:09.518196    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:47:09.529925    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:47:09.529939    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:47:09.534154    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:47:09.534161    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:47:09.545008    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:47:09.545017    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:47:09.571849    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:47:09.571858    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:47:09.608410    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:47:09.608417    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:47:09.696986    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:47:09.696998    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:47:09.710892    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:47:09.710903    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
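When a probe window expires, minikube enumerates the control-plane containers with "docker ps -a --filter name=k8s_<component>" and tails the last 400 log lines of each before the next healthz attempt; this gather-then-poll cycle repeats below (16:47:17, 16:47:25, 16:47:33, ...) until the overall wait deadline. A sketch of one gather step (tailComponent is my name):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // tailComponent mirrors one gather step from the log: list containers
    // whose name matches k8s_<component>, then tail each one's last 400
    // log lines.
    func tailComponent(component string) error {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return err
    	}
    	for _, id := range strings.Fields(string(out)) {
    		fmt.Printf(">>> %s [%s]\n", component, id)
    		logs := exec.Command("docker", "logs", "--tail", "400", id)
    		logs.Stdout, logs.Stderr = os.Stdout, os.Stderr
    		if err := logs.Run(); err != nil {
    			return err
    		}
    	}
    	return nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager"} {
    		if err := tailComponent(c); err != nil {
    			fmt.Fprintln(os.Stderr, err)
    		}
    	}
    }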
	I1209 16:47:12.228391    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:47:17.230773    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:47:17.231176    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:47:17.267297    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:47:17.267448    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:47:17.287761    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:47:17.287883    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:47:17.302176    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:47:17.302250    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:47:17.315025    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:47:17.315118    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:47:17.326146    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:47:17.326217    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:47:17.337181    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:47:17.337257    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:47:17.347889    5132 logs.go:282] 0 containers: []
	W1209 16:47:17.347902    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:47:17.347960    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:47:17.358404    5132 logs.go:282] 0 containers: []
	W1209 16:47:17.358414    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:47:17.358421    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:47:17.358425    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:47:17.371297    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:47:17.371309    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:47:17.398207    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:47:17.398217    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:47:17.411286    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:47:17.411295    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:47:17.416343    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:47:17.416349    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:47:17.430746    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:47:17.430774    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:47:17.447352    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:47:17.447365    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:47:17.458913    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:47:17.458925    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:47:17.476974    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:47:17.476985    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:47:17.513833    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:47:17.513840    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:47:17.549483    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:47:17.549498    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:47:17.563179    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:47:17.563191    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:47:17.576284    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:47:17.576296    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:47:17.587421    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:47:17.587432    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:47:17.602962    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:47:17.602973    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:47:20.121266    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:47:25.123616    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:47:25.123904    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:47:25.151250    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:47:25.151374    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:47:25.169090    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:47:25.169173    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:47:25.182043    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:47:25.182123    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:47:25.193569    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:47:25.193660    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:47:25.203864    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:47:25.203926    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:47:25.214200    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:47:25.214276    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:47:25.223995    5132 logs.go:282] 0 containers: []
	W1209 16:47:25.224005    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:47:25.224063    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:47:25.234161    5132 logs.go:282] 0 containers: []
	W1209 16:47:25.234172    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:47:25.234179    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:47:25.234184    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:47:25.246931    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:47:25.246943    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:47:25.258587    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:47:25.258598    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:47:25.294181    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:47:25.294188    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:47:25.312953    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:47:25.312966    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:47:25.325535    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:47:25.325547    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:47:25.340687    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:47:25.340695    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:47:25.352650    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:47:25.352662    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:47:25.372460    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:47:25.372472    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:47:25.387466    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:47:25.387479    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:47:25.392256    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:47:25.392267    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:47:25.428129    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:47:25.428141    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:47:25.441660    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:47:25.441671    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:47:25.466743    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:47:25.466753    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:47:25.478409    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:47:25.478419    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:47:27.993872    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:47:32.996739    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:47:32.997289    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:47:33.037063    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:47:33.037211    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:47:33.059102    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:47:33.059224    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:47:33.075081    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:47:33.075161    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:47:33.091340    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:47:33.091418    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:47:33.102081    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:47:33.102157    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:47:33.113209    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:47:33.113286    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:47:33.123592    5132 logs.go:282] 0 containers: []
	W1209 16:47:33.123604    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:47:33.123664    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:47:33.134240    5132 logs.go:282] 0 containers: []
	W1209 16:47:33.134253    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:47:33.134261    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:47:33.134266    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:47:33.149640    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:47:33.149650    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:47:33.166585    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:47:33.166596    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:47:33.190918    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:47:33.190929    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:47:33.216891    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:47:33.216903    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:47:33.251628    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:47:33.251638    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:47:33.264131    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:47:33.264147    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:47:33.275839    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:47:33.275852    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:47:33.310234    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:47:33.310246    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:47:33.315102    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:47:33.315111    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:47:33.328973    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:47:33.328984    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:47:33.340928    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:47:33.340942    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:47:33.352545    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:47:33.352558    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:47:33.367032    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:47:33.367045    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:47:33.380837    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:47:33.380845    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:47:35.894199    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:47:40.896623    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:47:40.897131    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:47:40.936959    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:47:40.937110    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:47:40.959151    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:47:40.959287    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:47:40.974785    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:47:40.974860    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:47:40.987345    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:47:40.987413    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:47:40.998168    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:47:40.998244    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:47:41.009109    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:47:41.009180    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:47:41.019480    5132 logs.go:282] 0 containers: []
	W1209 16:47:41.019492    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:47:41.019556    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:47:41.030105    5132 logs.go:282] 0 containers: []
	W1209 16:47:41.030118    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:47:41.030127    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:47:41.030133    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:47:41.041977    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:47:41.041990    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:47:41.057173    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:47:41.057186    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:47:41.075769    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:47:41.075782    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:47:41.113114    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:47:41.113122    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:47:41.127329    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:47:41.127340    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:47:41.139205    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:47:41.139216    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:47:41.151167    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:47:41.151183    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:47:41.164479    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:47:41.164492    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:47:41.175865    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:47:41.175876    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:47:41.180539    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:47:41.180548    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:47:41.215224    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:47:41.215233    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:47:41.227457    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:47:41.227466    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:47:41.251949    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:47:41.251956    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:47:41.265946    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:47:41.265959    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
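The gather cycle above can be reproduced by hand against the same node. A minimal Go sketch of the pattern visible in the ssh_runner lines follows (the name filter, --format template, and --tail 400 are taken from the commands in the log; running docker locally instead of over SSH, and the exact component list, are illustrative simplifications, not minikube's actual code):

// Sketch only: reproduces the docker-ps-then-docker-logs pattern seen above.
// Assumes a local docker CLI; minikube runs these over SSH inside the VM.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// mirrors: docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
		}
	}
}

The empty results for kindnet and storage-provisioner in the log correspond to containerIDs returning a zero-length slice, which minikube reports as the "No container was found matching" warnings.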
	I1209 16:47:43.781563    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:47:48.784216    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
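Each "Checking apiserver healthz" / "stopped" pair that brackets a gather cycle is a timed HTTPS probe against the apiserver. A rough, self-contained Go equivalent is sketched below; the 5-second client timeout is inferred from the gap between the two log lines, and the skip-verify TLS config, retry interval, and overall deadline are assumptions for the sketch, not minikube's actual settings:

// Sketch only (not minikube's code): the probe behind each
// "Checking apiserver healthz ..." / "stopped: ..." pair above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://10.0.2.15:8443/healthz"
	client := &http.Client{
		// ~5 s between "Checking" and "stopped" suggests a 5 s client timeout.
		Timeout: 5 * time.Second,
		// The apiserver's cert is not trusted by this host; skipping
		// verification keeps the sketch self-contained (an assumption).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute) // overall retry budget: assumed
	for time.Now().Before(deadline) {
		fmt.Printf("Checking apiserver healthz at %s ...\n", url)
		resp, err := client.Get(url)
		if err != nil {
			// An unresponsive apiserver yields exactly the error form in the log:
			// Get "...": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
			fmt.Printf("stopped: %s: %v\n", url, err)
			time.Sleep(2500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver is healthy")
			return
		}
		time.Sleep(2500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for apiserver healthz")
}

In this run the probe never succeeds, so the remainder of the section is the same probe-fail/gather-logs cycle repeating until the test's outer timeout expires.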
	I1209 16:47:48.784682    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:47:48.820720    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:47:48.820883    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:47:48.841105    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:47:48.841223    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:47:48.856958    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:47:48.857055    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:47:48.869055    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:47:48.869126    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:47:48.880110    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:47:48.880183    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:47:48.890891    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:47:48.890954    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:47:48.901704    5132 logs.go:282] 0 containers: []
	W1209 16:47:48.901716    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:47:48.901769    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:47:48.911949    5132 logs.go:282] 0 containers: []
	W1209 16:47:48.911964    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:47:48.911971    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:47:48.911977    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:47:48.949802    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:47:48.949814    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:47:48.961279    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:47:48.961289    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:47:48.972560    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:47:48.972568    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:47:48.977375    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:47:48.977382    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:47:48.990924    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:47:48.990936    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:47:49.007831    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:47:49.007845    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:47:49.025318    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:47:49.025332    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:47:49.050965    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:47:49.050973    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:47:49.087260    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:47:49.087268    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:47:49.100461    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:47:49.100475    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:47:49.111924    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:47:49.111936    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:47:49.124117    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:47:49.124130    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:47:49.139685    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:47:49.139697    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:47:49.151373    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:47:49.151386    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:47:51.667507    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:47:56.670268    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:47:56.670830    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:47:56.707855    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:47:56.708004    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:47:56.731076    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:47:56.731184    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:47:56.746761    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:47:56.746827    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:47:56.759334    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:47:56.759413    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:47:56.770047    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:47:56.770126    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:47:56.780515    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:47:56.780589    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:47:56.791055    5132 logs.go:282] 0 containers: []
	W1209 16:47:56.791065    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:47:56.791128    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:47:56.801176    5132 logs.go:282] 0 containers: []
	W1209 16:47:56.801189    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:47:56.801197    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:47:56.801203    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:47:56.827149    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:47:56.827157    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:47:56.838813    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:47:56.838828    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:47:56.873795    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:47:56.873805    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:47:56.889651    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:47:56.889665    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:47:56.903707    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:47:56.903717    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:47:56.915669    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:47:56.915678    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:47:56.933475    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:47:56.933486    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:47:56.945000    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:47:56.945010    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:47:56.980147    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:47:56.980156    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:47:56.994667    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:47:56.994679    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:47:57.006288    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:47:57.006299    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:47:57.021022    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:47:57.021033    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:47:57.025628    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:47:57.025637    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:47:57.039580    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:47:57.039588    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:47:59.555098    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:48:04.557891    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:48:04.558322    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:48:04.591599    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:48:04.591749    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:48:04.611246    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:48:04.611364    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:48:04.627951    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:48:04.628028    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:48:04.641191    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:48:04.641267    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:48:04.651462    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:48:04.651528    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:48:04.661688    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:48:04.661757    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:48:04.671832    5132 logs.go:282] 0 containers: []
	W1209 16:48:04.671848    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:48:04.671913    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:48:04.682029    5132 logs.go:282] 0 containers: []
	W1209 16:48:04.682041    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:48:04.682048    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:48:04.682054    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:48:04.707656    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:48:04.707667    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:48:04.742829    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:48:04.742841    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:48:04.760378    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:48:04.760389    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:48:04.776332    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:48:04.776344    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:48:04.788321    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:48:04.788330    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:48:04.808667    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:48:04.808680    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:48:04.820621    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:48:04.820633    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:48:04.838845    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:48:04.838854    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:48:04.851800    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:48:04.851813    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:48:04.863175    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:48:04.863189    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:48:04.874513    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:48:04.874525    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:48:04.892102    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:48:04.892116    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:48:04.903659    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:48:04.903673    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:48:04.939175    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:48:04.939184    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:48:07.445126    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:48:12.446140    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:48:12.446268    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:48:12.458053    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:48:12.458127    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:48:12.468818    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:48:12.468884    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:48:12.481315    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:48:12.481394    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:48:12.493519    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:48:12.493601    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:48:12.511254    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:48:12.511330    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:48:12.526707    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:48:12.526778    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:48:12.549972    5132 logs.go:282] 0 containers: []
	W1209 16:48:12.549989    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:48:12.550054    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:48:12.566746    5132 logs.go:282] 0 containers: []
	W1209 16:48:12.566761    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:48:12.566769    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:48:12.566775    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:48:12.583040    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:48:12.583055    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:48:12.597620    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:48:12.597631    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:48:12.608966    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:48:12.608975    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:48:12.646857    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:48:12.646871    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:48:12.651549    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:48:12.651556    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:48:12.666978    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:48:12.666988    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:48:12.679078    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:48:12.679093    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:48:12.691113    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:48:12.691128    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:48:12.725913    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:48:12.725924    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:48:12.745705    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:48:12.745716    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:48:12.757565    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:48:12.757576    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:48:12.771066    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:48:12.771079    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:48:12.790731    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:48:12.790741    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:48:12.802188    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:48:12.802199    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:48:15.329633    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:48:20.330536    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:48:20.330655    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:48:20.343816    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:48:20.343897    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:48:20.358924    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:48:20.359002    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:48:20.371330    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:48:20.371404    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:48:20.381757    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:48:20.381824    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:48:20.392416    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:48:20.392489    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:48:20.402911    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:48:20.402986    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:48:20.413338    5132 logs.go:282] 0 containers: []
	W1209 16:48:20.413351    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:48:20.413405    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:48:20.423708    5132 logs.go:282] 0 containers: []
	W1209 16:48:20.423720    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:48:20.423728    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:48:20.423734    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:48:20.436052    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:48:20.436065    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:48:20.454274    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:48:20.454286    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:48:20.467739    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:48:20.467751    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:48:20.483669    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:48:20.483678    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:48:20.495239    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:48:20.495251    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:48:20.533531    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:48:20.533538    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:48:20.537943    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:48:20.537949    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:48:20.549386    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:48:20.549397    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:48:20.564279    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:48:20.564288    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:48:20.579633    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:48:20.579641    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:48:20.601698    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:48:20.601710    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:48:20.616188    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:48:20.616199    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:48:20.652132    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:48:20.652143    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:48:20.675568    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:48:20.675579    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:48:23.204213    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:48:28.206465    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:48:28.206649    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:48:28.222995    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:48:28.223085    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:48:28.234652    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:48:28.234740    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:48:28.272647    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:48:28.272727    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:48:28.283965    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:48:28.284035    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:48:28.295449    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:48:28.295525    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:48:28.306528    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:48:28.306596    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:48:28.316971    5132 logs.go:282] 0 containers: []
	W1209 16:48:28.316984    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:48:28.317048    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:48:28.327376    5132 logs.go:282] 0 containers: []
	W1209 16:48:28.327387    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:48:28.327394    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:48:28.327401    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:48:28.343714    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:48:28.343728    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:48:28.381773    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:48:28.381783    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:48:28.418808    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:48:28.418820    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:48:28.432661    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:48:28.432675    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:48:28.447115    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:48:28.447125    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:48:28.471308    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:48:28.471316    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:48:28.490405    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:48:28.490418    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:48:28.503191    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:48:28.503201    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:48:28.507589    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:48:28.507598    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:48:28.527548    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:48:28.527563    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:48:28.545587    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:48:28.545601    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:48:28.559417    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:48:28.559428    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:48:28.571470    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:48:28.571483    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:48:28.583058    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:48:28.583068    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:48:31.097316    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:48:36.099698    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:48:36.099844    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:48:36.112366    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:48:36.112450    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:48:36.124520    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:48:36.124606    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:48:36.136525    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:48:36.136616    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:48:36.148080    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:48:36.148171    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:48:36.159880    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:48:36.159961    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:48:36.171383    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:48:36.171495    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:48:36.183672    5132 logs.go:282] 0 containers: []
	W1209 16:48:36.183682    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:48:36.183752    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:48:36.194617    5132 logs.go:282] 0 containers: []
	W1209 16:48:36.194631    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:48:36.194639    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:48:36.194646    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:48:36.213616    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:48:36.213636    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:48:36.235126    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:48:36.235140    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:48:36.274417    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:48:36.274438    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:48:36.279540    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:48:36.279551    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:48:36.306334    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:48:36.306349    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:48:36.319775    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:48:36.319791    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:48:36.344581    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:48:36.344598    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:48:36.358374    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:48:36.358384    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:48:36.397134    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:48:36.397146    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:48:36.415010    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:48:36.415029    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:48:36.433200    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:48:36.433215    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:48:36.447235    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:48:36.447252    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:48:36.460586    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:48:36.460600    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:48:36.476962    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:48:36.476976    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:48:38.993351    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:48:43.995641    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:48:43.995802    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:48:44.011449    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:48:44.011542    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:48:44.024726    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:48:44.024811    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:48:44.035659    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:48:44.035727    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:48:44.047741    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:48:44.047821    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:48:44.058360    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:48:44.058438    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:48:44.069231    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:48:44.069312    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:48:44.080246    5132 logs.go:282] 0 containers: []
	W1209 16:48:44.080259    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:48:44.080320    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:48:44.090509    5132 logs.go:282] 0 containers: []
	W1209 16:48:44.090530    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:48:44.090564    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:48:44.090579    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:48:44.104480    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:48:44.104494    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:48:44.115918    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:48:44.115927    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:48:44.131040    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:48:44.131050    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:48:44.142799    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:48:44.142813    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:48:44.154562    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:48:44.154571    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:48:44.191148    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:48:44.191159    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:48:44.205580    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:48:44.205590    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:48:44.217136    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:48:44.217150    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:48:44.229511    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:48:44.229521    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:48:44.241810    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:48:44.241821    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:48:44.246028    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:48:44.246033    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:48:44.260578    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:48:44.260592    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:48:44.278161    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:48:44.278169    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:48:44.303398    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:48:44.303407    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:48:46.841788    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:48:51.844195    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:48:51.844381    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:48:51.855880    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:48:51.855960    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:48:51.874585    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:48:51.874668    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:48:51.887542    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:48:51.887622    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:48:51.898375    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:48:51.898451    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:48:51.911630    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:48:51.911712    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:48:51.923015    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:48:51.923095    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:48:51.934619    5132 logs.go:282] 0 containers: []
	W1209 16:48:51.934641    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:48:51.934705    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:48:51.945299    5132 logs.go:282] 0 containers: []
	W1209 16:48:51.945310    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:48:51.945318    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:48:51.945324    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:48:51.959712    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:48:51.959728    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:48:51.980257    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:48:51.980271    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:48:51.992146    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:48:51.992160    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:48:52.027053    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:48:52.027063    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:48:52.039525    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:48:52.039537    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:48:52.056357    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:48:52.056371    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:48:52.076162    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:48:52.076178    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:48:52.104335    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:48:52.104355    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:48:52.144309    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:48:52.144328    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:48:52.149555    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:48:52.149567    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:48:52.163988    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:48:52.164004    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:48:52.177079    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:48:52.177092    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:48:52.197168    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:48:52.197180    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:48:52.209089    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:48:52.209101    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:48:54.725086    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:48:59.727058    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:48:59.727278    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:48:59.739180    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:48:59.739271    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:48:59.749808    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:48:59.749886    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:48:59.760032    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:48:59.760105    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:48:59.770372    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:48:59.770465    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:48:59.780623    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:48:59.780701    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:48:59.795090    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:48:59.795172    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:48:59.807625    5132 logs.go:282] 0 containers: []
	W1209 16:48:59.807635    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:48:59.807699    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:48:59.818453    5132 logs.go:282] 0 containers: []
	W1209 16:48:59.818466    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:48:59.818475    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:48:59.818483    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:48:59.822737    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:48:59.822746    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:48:59.856735    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:48:59.856751    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:48:59.871305    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:48:59.871316    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:48:59.884205    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:48:59.884217    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:48:59.898891    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:48:59.898906    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:48:59.910389    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:48:59.910402    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:48:59.946719    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:48:59.946729    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:48:59.959398    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:48:59.959411    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:48:59.971116    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:48:59.971130    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:48:59.996859    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:48:59.996867    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:49:00.011153    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:49:00.011170    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:49:00.026198    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:49:00.026208    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:49:00.042918    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:49:00.042930    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:49:00.055019    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:49:00.055028    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:49:02.568399    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:07.571323    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:07.572077    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:49:07.620356    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:49:07.620507    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:49:07.639294    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:49:07.639397    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:49:07.653380    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:49:07.653460    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:49:07.665494    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:49:07.665581    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:49:07.676750    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:49:07.676823    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:49:07.687507    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:49:07.687581    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:49:07.697547    5132 logs.go:282] 0 containers: []
	W1209 16:49:07.697558    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:49:07.697616    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:49:07.714758    5132 logs.go:282] 0 containers: []
	W1209 16:49:07.714771    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:49:07.714782    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:49:07.714788    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:49:07.753055    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:49:07.753064    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:49:07.796353    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:49:07.796365    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:49:07.810930    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:49:07.810943    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:49:07.822655    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:49:07.822667    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:49:07.840455    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:49:07.840468    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:49:07.845109    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:49:07.845118    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:49:07.858617    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:49:07.858628    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:49:07.874276    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:49:07.874290    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:49:07.888674    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:49:07.888687    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:49:07.905361    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:49:07.905374    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:49:07.920214    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:49:07.920226    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:49:07.931502    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:49:07.931514    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:49:07.943222    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:49:07.943233    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:49:07.954717    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:49:07.954730    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:49:10.481944    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:15.484662    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:15.484785    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:49:15.496209    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:49:15.496286    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:49:15.507747    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:49:15.507815    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:49:15.518817    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:49:15.518893    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:49:15.529725    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:49:15.529807    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:49:15.540447    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:49:15.540522    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:49:15.553825    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:49:15.553900    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:49:15.564941    5132 logs.go:282] 0 containers: []
	W1209 16:49:15.564953    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:49:15.565018    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:49:15.575012    5132 logs.go:282] 0 containers: []
	W1209 16:49:15.575029    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:49:15.575037    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:49:15.575043    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:49:15.613760    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:49:15.613769    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:49:15.627679    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:49:15.627691    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:49:15.640848    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:49:15.640859    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:49:15.654218    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:49:15.654228    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:49:15.658888    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:49:15.658900    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:49:15.697991    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:49:15.698006    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:49:15.715956    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:49:15.715967    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:49:15.729712    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:49:15.729726    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:49:15.745048    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:49:15.745061    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:49:15.757270    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:49:15.757279    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:49:15.781006    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:49:15.781012    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:49:15.792918    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:49:15.792928    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:49:15.807378    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:49:15.807391    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:49:15.819734    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:49:15.819745    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:49:18.332830    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:23.335004    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:23.335257    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:49:23.358677    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:49:23.358805    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:49:23.372208    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:49:23.372283    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:49:23.383783    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:49:23.383862    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:49:23.394076    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:49:23.394144    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:49:23.404224    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:49:23.404305    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:49:23.414687    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:49:23.414755    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:49:23.424838    5132 logs.go:282] 0 containers: []
	W1209 16:49:23.424847    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:49:23.424905    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:49:23.434890    5132 logs.go:282] 0 containers: []
	W1209 16:49:23.434902    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:49:23.434911    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:49:23.434918    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:49:23.472567    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:49:23.472575    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:49:23.507907    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:49:23.507922    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:49:23.524047    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:49:23.524061    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:49:23.535366    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:49:23.535377    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:49:23.551240    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:49:23.551254    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:49:23.565561    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:49:23.565572    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:49:23.588455    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:49:23.588461    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:49:23.604506    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:49:23.604519    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:49:23.616085    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:49:23.616096    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:49:23.634481    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:49:23.634490    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:49:23.639206    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:49:23.639213    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:49:23.654006    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:49:23.654018    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:49:23.667429    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:49:23.667442    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:49:23.679042    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:49:23.679055    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:49:26.193441    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:31.194112    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:31.194228    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:49:31.205884    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:49:31.205969    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:49:31.219805    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:49:31.219892    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:49:31.231783    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:49:31.231856    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:49:31.243426    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:49:31.243507    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:49:31.255446    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:49:31.255529    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:49:31.267481    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:49:31.267561    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:49:31.283387    5132 logs.go:282] 0 containers: []
	W1209 16:49:31.283401    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:49:31.283474    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:49:31.294706    5132 logs.go:282] 0 containers: []
	W1209 16:49:31.294721    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:49:31.294729    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:49:31.294740    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:49:31.335288    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:49:31.335301    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:49:31.349732    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:49:31.349751    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:49:31.362511    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:49:31.362526    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:49:31.390974    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:49:31.390994    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:49:31.428699    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:49:31.428718    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:49:31.447159    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:49:31.447177    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:49:31.460105    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:49:31.460117    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:49:31.481561    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:49:31.481579    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:49:31.494830    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:49:31.494841    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:49:31.509976    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:49:31.509989    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:49:31.514614    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:49:31.514625    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:49:31.530300    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:49:31.530317    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:49:31.543355    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:49:31.543367    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:49:31.562422    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:49:31.562436    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:49:34.078467    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:39.080768    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:39.081279    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:49:39.121181    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:49:39.121351    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:49:39.143686    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:49:39.143814    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:49:39.166728    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:49:39.166817    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:49:39.177779    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:49:39.177860    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:49:39.187979    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:49:39.188063    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:49:39.198839    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:49:39.198916    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:49:39.209425    5132 logs.go:282] 0 containers: []
	W1209 16:49:39.209436    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:49:39.209499    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:49:39.219438    5132 logs.go:282] 0 containers: []
	W1209 16:49:39.219451    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:49:39.219460    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:49:39.219466    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:49:39.224087    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:49:39.224095    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:49:39.236180    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:49:39.236193    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:49:39.248606    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:49:39.248620    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:49:39.282485    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:49:39.282501    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:49:39.296808    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:49:39.296820    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:49:39.309824    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:49:39.309835    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:49:39.324465    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:49:39.324477    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:49:39.349668    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:49:39.349683    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:49:39.362010    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:49:39.362024    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:49:39.376259    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:49:39.376275    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:49:39.388798    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:49:39.388813    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:49:39.408305    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:49:39.408321    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:49:39.445468    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:49:39.445483    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:49:39.464293    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:49:39.464308    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:49:41.977731    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:46.980058    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:46.980351    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:49:47.005134    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:49:47.005234    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:49:47.020844    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:49:47.020934    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:49:47.033691    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:49:47.033775    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:49:47.044876    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:49:47.044953    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:49:47.055684    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:49:47.055761    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:49:47.066098    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:49:47.066172    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:49:47.076480    5132 logs.go:282] 0 containers: []
	W1209 16:49:47.076491    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:49:47.076551    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:49:47.089506    5132 logs.go:282] 0 containers: []
	W1209 16:49:47.089517    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:49:47.089526    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:49:47.089532    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:49:47.126430    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:49:47.126438    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:49:47.162103    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:49:47.162114    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:49:47.175046    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:49:47.175057    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:49:47.187155    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:49:47.187169    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:49:47.201705    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:49:47.201719    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:49:47.218849    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:49:47.218859    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:49:47.237648    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:49:47.237659    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:49:47.251315    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:49:47.251330    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:49:47.262985    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:49:47.262995    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:49:47.281377    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:49:47.281391    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:49:47.285928    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:49:47.285936    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:49:47.303553    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:49:47.303563    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:49:47.321049    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:49:47.321059    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:49:47.343397    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:49:47.343404    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:49:49.856283    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:54.858546    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:54.858709    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:49:54.872891    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:49:54.872976    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:49:54.884049    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:49:54.884134    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:49:54.894876    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:49:54.894953    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:49:54.905717    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:49:54.905801    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:49:54.916624    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:49:54.916705    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:49:54.927185    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:49:54.927264    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:49:54.938164    5132 logs.go:282] 0 containers: []
	W1209 16:49:54.938176    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:49:54.938246    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:49:54.949605    5132 logs.go:282] 0 containers: []
	W1209 16:49:54.949616    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:49:54.949623    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:49:54.949630    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:49:54.955239    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:49:54.955252    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:49:54.994969    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:49:54.994988    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:49:55.010592    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:49:55.010605    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:49:55.029998    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:49:55.030014    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:49:55.042770    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:49:55.042783    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:49:55.057177    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:49:55.057190    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:49:55.073151    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:49:55.073168    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:49:55.088389    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:49:55.088411    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:49:55.111804    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:49:55.111820    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:49:55.125358    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:49:55.125374    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:49:55.138821    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:49:55.138833    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:49:55.152033    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:49:55.152048    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:49:55.193234    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:49:55.193254    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:49:55.207790    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:49:55.207803    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:49:57.734825    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:02.737306    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:02.737544    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:50:02.759712    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:50:02.759843    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:50:02.774678    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:50:02.774759    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:50:02.786685    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:50:02.786767    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:50:02.796836    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:50:02.796911    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:50:02.806594    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:50:02.806660    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:50:02.817697    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:50:02.817779    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:50:02.827710    5132 logs.go:282] 0 containers: []
	W1209 16:50:02.827724    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:50:02.827790    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:50:02.837737    5132 logs.go:282] 0 containers: []
	W1209 16:50:02.837750    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:50:02.837759    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:50:02.837765    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:50:02.849003    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:50:02.849014    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:50:02.872540    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:50:02.872547    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:50:02.909152    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:50:02.909160    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:50:02.926179    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:50:02.926194    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:50:02.941790    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:50:02.941800    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:50:02.956042    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:50:02.956052    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:50:02.969126    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:50:02.969139    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:50:02.980371    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:50:02.980382    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:50:02.985110    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:50:02.985118    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:50:02.998980    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:50:02.998990    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:50:03.010845    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:50:03.010856    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:50:03.022691    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:50:03.022701    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:50:03.035154    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:50:03.035165    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:50:03.070276    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:50:03.070288    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:50:05.586446    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:10.588711    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:10.588797    5132 kubeadm.go:597] duration metric: took 4m3.574452209s to restartPrimaryControlPlane
	W1209 16:50:10.588857    5132 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 16:50:10.588888    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1209 16:50:11.536596    5132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 16:50:11.541693    5132 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 16:50:11.544665    5132 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 16:50:11.547613    5132 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 16:50:11.547620    5132 kubeadm.go:157] found existing configuration files:
	
	I1209 16:50:11.547652    5132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/admin.conf
	I1209 16:50:11.550401    5132 kubeadm.go:163] "https://control-plane.minikube.internal:64988" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 16:50:11.550431    5132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 16:50:11.553101    5132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/kubelet.conf
	I1209 16:50:11.556090    5132 kubeadm.go:163] "https://control-plane.minikube.internal:64988" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 16:50:11.556120    5132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 16:50:11.559164    5132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/controller-manager.conf
	I1209 16:50:11.561639    5132 kubeadm.go:163] "https://control-plane.minikube.internal:64988" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 16:50:11.561672    5132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 16:50:11.564374    5132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/scheduler.conf
	I1209 16:50:11.567341    5132 kubeadm.go:163] "https://control-plane.minikube.internal:64988" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 16:50:11.567370    5132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
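
The grep/rm sequence above keeps a kubeconfig under /etc/kubernetes only if it already references the expected control-plane endpoint and deletes it otherwise, so the `kubeadm init` that follows regenerates a consistent set. A hedged Go sketch of that check, with the paths and endpoint copied from the log and error handling simplified; cleanStaleConfigs is an illustrative name:

    // Keep each kubeconfig only if it references the expected endpoint;
    // otherwise remove it (ignoring "no such file", as `rm -f` does above).
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func cleanStaleConfigs(endpoint string, files []string) {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                os.Remove(f)
                fmt.Printf("removed (or absent): %s\n", f)
                continue
            }
            fmt.Printf("kept: %s\n", f)
        }
    }

    func main() {
        cleanStaleConfigs("https://control-plane.minikube.internal:64988", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
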
	I1209 16:50:11.570271    5132 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 16:50:11.586299    5132 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1209 16:50:11.586328    5132 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 16:50:11.645586    5132 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 16:50:11.645648    5132 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 16:50:11.645708    5132 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1209 16:50:11.701707    5132 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 16:50:11.708834    5132 out.go:235]   - Generating certificates and keys ...
	I1209 16:50:11.708865    5132 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 16:50:11.708897    5132 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 16:50:11.708949    5132 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 16:50:11.708986    5132 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 16:50:11.709020    5132 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 16:50:11.709055    5132 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 16:50:11.709097    5132 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 16:50:11.709125    5132 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 16:50:11.709165    5132 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 16:50:11.709213    5132 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 16:50:11.709235    5132 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 16:50:11.709264    5132 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 16:50:11.781954    5132 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 16:50:11.886600    5132 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 16:50:12.023670    5132 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 16:50:12.143293    5132 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 16:50:12.176881    5132 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 16:50:12.177299    5132 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 16:50:12.177338    5132 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 16:50:12.265565    5132 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 16:50:12.269233    5132 out.go:235]   - Booting up control plane ...
	I1209 16:50:12.269278    5132 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 16:50:12.269311    5132 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 16:50:12.269342    5132 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 16:50:12.269377    5132 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 16:50:12.269450    5132 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 16:50:17.269775    5132 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.004245 seconds
	I1209 16:50:17.269890    5132 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 16:50:17.277123    5132 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 16:50:17.803713    5132 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 16:50:17.804047    5132 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-688000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 16:50:18.310462    5132 kubeadm.go:310] [bootstrap-token] Using token: x6vu92.efyexjcrt0uobog7
	I1209 16:50:18.313291    5132 out.go:235]   - Configuring RBAC rules ...
	I1209 16:50:18.313406    5132 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 16:50:18.314189    5132 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 16:50:18.320385    5132 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 16:50:18.321955    5132 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 16:50:18.323655    5132 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 16:50:18.325402    5132 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 16:50:18.331084    5132 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 16:50:18.493298    5132 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 16:50:18.716470    5132 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 16:50:18.716819    5132 kubeadm.go:310] 
	I1209 16:50:18.716855    5132 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 16:50:18.716861    5132 kubeadm.go:310] 
	I1209 16:50:18.716905    5132 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 16:50:18.716944    5132 kubeadm.go:310] 
	I1209 16:50:18.716965    5132 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 16:50:18.716997    5132 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 16:50:18.717022    5132 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 16:50:18.717025    5132 kubeadm.go:310] 
	I1209 16:50:18.717063    5132 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 16:50:18.717065    5132 kubeadm.go:310] 
	I1209 16:50:18.717092    5132 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 16:50:18.717096    5132 kubeadm.go:310] 
	I1209 16:50:18.717121    5132 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 16:50:18.717167    5132 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 16:50:18.717211    5132 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 16:50:18.717213    5132 kubeadm.go:310] 
	I1209 16:50:18.717254    5132 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 16:50:18.717305    5132 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 16:50:18.717317    5132 kubeadm.go:310] 
	I1209 16:50:18.717365    5132 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x6vu92.efyexjcrt0uobog7 \
	I1209 16:50:18.717417    5132 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eb7b4eec38a0897ce971e2bba2a6b79ec587773d147d857ca417d407ce72cb1f \
	I1209 16:50:18.717430    5132 kubeadm.go:310] 	--control-plane 
	I1209 16:50:18.717433    5132 kubeadm.go:310] 
	I1209 16:50:18.717473    5132 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 16:50:18.717475    5132 kubeadm.go:310] 
	I1209 16:50:18.717512    5132 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x6vu92.efyexjcrt0uobog7 \
	I1209 16:50:18.717585    5132 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eb7b4eec38a0897ce971e2bba2a6b79ec587773d147d857ca417d407ce72cb1f 
	I1209 16:50:18.717635    5132 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 16:50:18.717642    5132 cni.go:84] Creating CNI manager for ""
	I1209 16:50:18.717651    5132 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:50:18.720434    5132 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 16:50:18.726385    5132 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 16:50:18.729292    5132 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
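
The 496-byte /etc/cni/net.d/1-k8s.conflist written above configures the bridge CNI that was recommended two lines earlier for the docker runtime on Kubernetes v1.24+. The exact file contents are not shown in the log; the Go constant below holds a representative bridge-plus-portmap chain with host-local IPAM of the kind typically written here, offered as an assumption rather than a transcript of the actual file:

    // A representative bridge CNI conflist; field values are typical
    // defaults, not the verbatim contents of 1-k8s.conflist.
    package main

    import "fmt"

    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() { fmt.Println(bridgeConflist) }
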
	I1209 16:50:18.734143    5132 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 16:50:18.734196    5132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 16:50:18.734199    5132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-688000 minikube.k8s.io/updated_at=2024_12_09T16_50_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=running-upgrade-688000 minikube.k8s.io/primary=true
	I1209 16:50:18.780564    5132 ops.go:34] apiserver oom_adj: -16
	I1209 16:50:18.780562    5132 kubeadm.go:1113] duration metric: took 46.410958ms to wait for elevateKubeSystemPrivileges
	I1209 16:50:18.780685    5132 kubeadm.go:394] duration metric: took 4m11.779973583s to StartCluster
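
The `cat /proc/$(pgrep kube-apiserver)/oom_adj` probe above reports -16, meaning the kernel OOM killer is strongly discouraged from selecting the apiserver when memory runs short. A small Go sketch of the same probe follows, with a simplified pgrep pattern; this is an illustration, not minikube's code:

    // Find the newest kube-apiserver process and read its oom_adj score.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("apiserver not running:", err)
            return
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }
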
	I1209 16:50:18.780698    5132 settings.go:142] acquiring lock: {Name:mk6085b49e250ce3863979186260a283889e4dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:50:18.780802    5132 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:50:18.781212    5132 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/kubeconfig: {Name:mk5092322010dd3bee2f23e3f2812067ca57270f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:50:18.781437    5132 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:50:18.781450    5132 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 16:50:18.781485    5132 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-688000"
	I1209 16:50:18.781493    5132 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-688000"
	W1209 16:50:18.781497    5132 addons.go:243] addon storage-provisioner should already be in state true
	I1209 16:50:18.781512    5132 host.go:66] Checking if "running-upgrade-688000" exists ...
	I1209 16:50:18.781532    5132 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-688000"
	I1209 16:50:18.781554    5132 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-688000"
	I1209 16:50:18.781611    5132 config.go:182] Loaded profile config "running-upgrade-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:50:18.782633    5132 kapi.go:59] client config for running-upgrade-688000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/client.key", CAFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10484b740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 16:50:18.782971    5132 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-688000"
	W1209 16:50:18.782976    5132 addons.go:243] addon default-storageclass should already be in state true
	I1209 16:50:18.782984    5132 host.go:66] Checking if "running-upgrade-688000" exists ...
	I1209 16:50:18.785391    5132 out.go:177] * Verifying Kubernetes components...
	I1209 16:50:18.785792    5132 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 16:50:18.791679    5132 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 16:50:18.791687    5132 sshutil.go:53] new ssh client: &{IP:localhost Port:64956 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/running-upgrade-688000/id_rsa Username:docker}
	I1209 16:50:18.795386    5132 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:50:18.799378    5132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:50:18.803373    5132 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 16:50:18.803380    5132 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 16:50:18.803387    5132 sshutil.go:53] new ssh client: &{IP:localhost Port:64956 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/running-upgrade-688000/id_rsa Username:docker}
	I1209 16:50:18.894912    5132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 16:50:18.899774    5132 api_server.go:52] waiting for apiserver process to appear ...
	I1209 16:50:18.899832    5132 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 16:50:18.904052    5132 api_server.go:72] duration metric: took 122.60525ms to wait for apiserver process to appear ...
	I1209 16:50:18.904060    5132 api_server.go:88] waiting for apiserver healthz status ...
	I1209 16:50:18.904066    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:18.926372    5132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 16:50:18.945000    5132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
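Both addon manifests are copied onto the guest over SSH (localhost:64956, the forwarded QEMU port, authenticating as docker with the machine's id_rsa key) and then applied with kubectl on the node, against the node-local kubeconfig. A self-contained sketch of that run-over-SSH pattern using golang.org/x/crypto/ssh (an illustration of the mechanism, not minikube's sshutil/ssh_runner code; the host-key check is skipped here only to keep the sketch short):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/running-upgrade-688000/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; verify host keys in real code
	}
	client, err := ssh.Dial("tcp", "localhost:64956", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// Same shape as the apply step above: kubectl on the guest, guest-side kubeconfig.
	out, err := session.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml")
	fmt.Printf("%s err=%v\n", out, err)
}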
	I1209 16:50:19.270446    5132 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 16:50:19.270456    5132 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
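The two envvar.go lines record client-go feature-gate defaults; these gates are read from the process environment rather than from flags, and nothing in this run sets them, hence both report enabled=false. A hedged sketch of that lookup pattern (the KUBE_FEATURE_ prefix is an assumption based on client-go's env-var-driven gates, not something this log confirms):

package main

import (
	"fmt"
	"os"
	"strconv"
)

// featureEnabled mimics an env-var feature gate: an unset or
// unparsable variable falls back to the compiled-in default.
func featureEnabled(name string, def bool) bool {
	v, ok := os.LookupEnv("KUBE_FEATURE_" + name) // assumed prefix
	if !ok {
		return def
	}
	b, err := strconv.ParseBool(v)
	if err != nil {
		return def
	}
	return b
}

func main() {
	fmt.Println(featureEnabled("WatchListClient", false)) // false unless exported
}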
	I1209 16:50:23.906145    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
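From here the run settles into its failure loop: each Checking/stopped pair is one healthz probe against the apiserver, a GET that gives up after roughly five seconds, retried until the overall wait budget is spent. A minimal sketch of that poll (the timeout values and the skipped TLS verification are illustrative assumptions, not minikube's api_server.go settings):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify is for the sketch only; the real check trusts the cluster CA.
	client := &http.Client{
		Timeout:   5 * time.Second, // per-request deadline, matching the ~5s gaps above
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for apiserver /healthz")
}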
	I1209 16:50:23.906209    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:28.906955    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:28.907000    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:33.907569    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:33.907592    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:38.908196    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:38.908226    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:43.909055    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:43.909118    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:48.910307    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:48.910367    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1209 16:50:49.270922    5132 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1209 16:50:49.275224    5132 out.go:177] * Enabled addons: storage-provisioner
	I1209 16:50:49.286995    5132 addons.go:510] duration metric: took 30.505661166s for enable addons: enabled=[storage-provisioner]
	I1209 16:50:53.911707    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:53.911741    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:58.913804    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:58.913827    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:03.916020    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:03.916062    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:08.918342    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:08.918365    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:13.920642    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:13.920693    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:18.923040    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:18.923255    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:18.934478    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:51:18.934556    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:18.945380    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:51:18.945459    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:18.956158    5132 logs.go:282] 2 containers: [02d8d43dfbea 64247a147667]
	I1209 16:51:18.956241    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:18.967212    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:51:18.967277    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:18.977860    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:51:18.977931    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:18.988738    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:51:18.988815    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:19.003066    5132 logs.go:282] 0 containers: []
	W1209 16:51:19.003078    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:19.003171    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:19.013761    5132 logs.go:282] 1 containers: [1f492ad3a491]
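With the apiserver still unreachable, minikube pivots to diagnostics: one docker ps -a --filter name=k8s_<component> per control-plane component to find container IDs (including exited ones), then docker logs --tail 400 on each hit, as in the gathering steps that follow. A small sketch of that enumerate-then-tail loop (a hypothetical standalone tool, not minikube's logs.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the filter used above:
// docker ps -a --filter name=<pattern> --format {{.ID}}.
func containerIDs(pattern string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name="+pattern, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Println(name, "error:", err)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines, as the gathering steps below do.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s (%s): %d bytes of logs\n", name, id, len(logs))
		}
	}
}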
	I1209 16:51:19.013779    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:51:19.013784    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:51:19.025949    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:51:19.025961    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:51:19.047878    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:51:19.047890    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:51:19.062221    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:51:19.062232    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:51:19.077821    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:19.077831    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:19.116374    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:51:19.116386    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:51:19.131173    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:51:19.131187    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:51:19.142595    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:51:19.142610    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:51:19.161167    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:51:19.161178    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:51:19.177589    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:19.177598    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:19.201320    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:19.201328    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:51:19.235232    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:51:19.235330    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:51:19.236458    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:19.236464    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:19.241848    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:51:19.241856    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:19.253335    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:51:19.253351    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:51:19.253377    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:51:19.253381    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:51:19.253384    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:51:19.253392    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:51:19.253395    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
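The recurring kubelet problem flagged above is a node-authorizer denial rather than a crash: the kubelet authenticates as system:node:running-upgrade-688000, and the node authorizer only lets a node read a ConfigMap it can tie to a pod scheduled on that node, so "no relationship found" means that link was missing for the coredns ConfigMap at that moment. This is typically transient while a restarted control plane repopulates its authorization graph. On a reachable cluster a comparable check would be kubectl auth can-i list configmaps -n kube-system --as=system:node:running-upgrade-688000 --as-group=system:nodes (illustrative only; impersonation itself needs RBAC permission, and the apiserver in this run is not answering anyway).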
	I1209 16:51:29.257539    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:34.259976    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:34.260218    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:34.296265    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:51:34.296391    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:34.313509    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:51:34.313598    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:34.326717    5132 logs.go:282] 2 containers: [02d8d43dfbea 64247a147667]
	I1209 16:51:34.326802    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:34.338347    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:51:34.338418    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:34.349655    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:51:34.349725    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:34.361055    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:51:34.361129    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:34.374071    5132 logs.go:282] 0 containers: []
	W1209 16:51:34.374081    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:34.374146    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:34.385898    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:51:34.385916    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:51:34.385922    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:51:34.400883    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:34.400896    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:34.426096    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:51:34.426107    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:34.437915    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:34.437925    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:34.444374    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:34.444385    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:34.490406    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:51:34.490421    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:51:34.505106    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:51:34.505117    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:51:34.521777    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:51:34.521788    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:51:34.533890    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:51:34.533906    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:51:34.548722    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:51:34.548732    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:51:34.561050    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:51:34.561059    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:51:34.586512    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:34.586526    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:51:34.621478    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:51:34.621575    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:51:34.622728    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:51:34.622742    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:51:34.636791    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:51:34.636802    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:51:34.636829    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:51:34.636836    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:51:34.636840    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:51:34.636847    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:51:34.636849    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:51:44.640951    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:49.643318    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:49.643486    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:49.658180    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:51:49.658263    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:49.670073    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:51:49.670151    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:49.681370    5132 logs.go:282] 2 containers: [02d8d43dfbea 64247a147667]
	I1209 16:51:49.681444    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:49.692991    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:51:49.693063    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:49.704742    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:51:49.704824    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:49.716452    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:51:49.716525    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:49.727617    5132 logs.go:282] 0 containers: []
	W1209 16:51:49.727630    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:49.727694    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:49.738568    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:51:49.738583    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:49.738588    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:49.742953    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:49.742963    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:49.778803    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:51:49.778813    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:51:49.791043    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:51:49.791056    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:51:49.806897    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:49.806910    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:49.831411    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:49.831421    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:51:49.865392    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:51:49.865487    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:51:49.866582    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:51:49.866590    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:51:49.881270    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:51:49.881284    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:51:49.896403    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:51:49.896416    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:51:49.909491    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:51:49.909503    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:51:49.925726    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:51:49.925740    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:51:49.943895    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:51:49.943904    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:51:49.956057    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:51:49.956069    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:49.968433    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:51:49.968443    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:51:49.968469    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:51:49.968473    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:51:49.968476    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:51:49.968479    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:51:49.968483    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:51:59.972620    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:04.974924    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:04.975027    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:04.986424    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:52:04.986509    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:04.997550    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:52:04.997640    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:05.009220    5132 logs.go:282] 2 containers: [02d8d43dfbea 64247a147667]
	I1209 16:52:05.009300    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:05.020761    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:52:05.020845    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:05.031983    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:52:05.032064    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:05.043382    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:52:05.043462    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:05.053839    5132 logs.go:282] 0 containers: []
	W1209 16:52:05.053849    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:05.053911    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:05.064645    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:52:05.064659    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:52:05.064665    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:52:05.079249    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:52:05.079260    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:52:05.093941    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:52:05.093955    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:52:05.105702    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:05.105716    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:05.130079    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:05.130089    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:52:05.164005    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:52:05.164098    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:52:05.165226    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:05.165231    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:05.169609    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:05.169619    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:05.204130    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:52:05.204142    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:52:05.221797    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:52:05.221806    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:52:05.233343    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:52:05.233355    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:05.244693    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:52:05.244702    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:52:05.259422    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:52:05.259432    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:52:05.275429    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:52:05.275439    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:52:05.291497    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:52:05.291506    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:52:05.291532    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:52:05.291537    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:52:05.291540    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:52:05.291544    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:52:05.291547    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:52:15.295754    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:20.298413    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:20.298688    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:20.322368    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:52:20.322484    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:20.337950    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:52:20.338032    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:20.350755    5132 logs.go:282] 2 containers: [02d8d43dfbea 64247a147667]
	I1209 16:52:20.350840    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:20.374238    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:52:20.374307    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:20.384795    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:52:20.384882    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:20.394961    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:52:20.395035    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:20.404954    5132 logs.go:282] 0 containers: []
	W1209 16:52:20.404963    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:20.405023    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:20.415873    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:52:20.415896    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:20.415903    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:52:20.448531    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:52:20.448623    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:52:20.449764    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:20.449772    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:20.454236    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:20.454246    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:20.494044    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:52:20.494057    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:52:20.509551    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:52:20.509567    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:52:20.525375    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:52:20.525384    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:52:20.537674    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:20.537685    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:20.563456    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:52:20.563468    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:52:20.577099    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:52:20.577110    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:52:20.594891    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:52:20.594903    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:52:20.607765    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:52:20.607776    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:52:20.628618    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:52:20.628631    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:52:20.641029    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:52:20.641042    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:20.654273    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:52:20.654285    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:52:20.654314    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:52:20.654320    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:52:20.654323    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:52:20.654330    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:52:20.654334    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:52:30.658444    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:35.660732    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:35.661292    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:35.699229    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:52:35.699385    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:35.719430    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:52:35.719530    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:35.741596    5132 logs.go:282] 4 containers: [7374e35c93c9 f86fa2ef9959 02d8d43dfbea 64247a147667]
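Note the coredns count has grown from 2 to 4 containers here: docker ps -a lists exited containers as well, so the two new IDs (7374e35c93c9, f86fa2ef9959) most likely come from coredns pods being restarted during the wait; their logs are gathered alongside the older pair below.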
	I1209 16:52:35.741674    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:35.752853    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:52:35.752933    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:35.764060    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:52:35.764128    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:35.779138    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:52:35.779225    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:35.790023    5132 logs.go:282] 0 containers: []
	W1209 16:52:35.790033    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:35.790101    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:35.800921    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:52:35.800938    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:52:35.800944    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:52:35.818659    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:52:35.818672    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:52:35.832960    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:52:35.832970    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:52:35.845355    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:52:35.845366    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:35.860410    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:35.860422    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:35.865311    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:35.865319    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:35.900145    5132 logs.go:123] Gathering logs for coredns [f86fa2ef9959] ...
	I1209 16:52:35.900156    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86fa2ef9959"
	I1209 16:52:35.912850    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:35.912865    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:35.938641    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:52:35.938655    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:52:35.950764    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:52:35.950777    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:52:35.962850    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:35.962865    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:52:35.996330    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:52:35.996427    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:52:35.997560    5132 logs.go:123] Gathering logs for coredns [7374e35c93c9] ...
	I1209 16:52:35.997565    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7374e35c93c9"
	I1209 16:52:36.009382    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:52:36.009391    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:52:36.024168    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:52:36.024178    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:52:36.049534    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:52:36.049543    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:52:36.060830    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:52:36.060839    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:52:36.060869    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:52:36.060876    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:52:36.060878    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:52:36.060882    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:52:36.060884    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:52:46.064979    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:51.065510    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:51.065700    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:51.079665    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:52:51.079756    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:51.090875    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:52:51.090951    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:51.101963    5132 logs.go:282] 4 containers: [7374e35c93c9 f86fa2ef9959 02d8d43dfbea 64247a147667]
	I1209 16:52:51.102040    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:51.112414    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:52:51.112489    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:51.126517    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:52:51.126609    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:51.145518    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:52:51.145596    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:51.155834    5132 logs.go:282] 0 containers: []
	W1209 16:52:51.155850    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:51.155911    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:51.166570    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:52:51.166587    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:51.166593    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:51.171733    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:52:51.171740    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:52:51.182967    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:52:51.182979    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:52:51.195139    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:51.195149    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:51.220301    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:51.220308    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:51.257356    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:52:51.257368    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:52:51.271804    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:52:51.271816    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:52:51.292970    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:52:51.292984    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:52:51.304390    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:52:51.304401    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:51.317140    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:51.317149    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:52:51.351147    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:52:51.351243    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:52:51.352371    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:52:51.352376    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:52:51.366786    5132 logs.go:123] Gathering logs for coredns [7374e35c93c9] ...
	I1209 16:52:51.366797    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7374e35c93c9"
	I1209 16:52:51.382117    5132 logs.go:123] Gathering logs for coredns [f86fa2ef9959] ...
	I1209 16:52:51.382129    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86fa2ef9959"
	I1209 16:52:51.393908    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:52:51.393921    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:52:51.405952    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:52:51.405964    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:52:51.422655    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:52:51.422667    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:52:51.422695    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:52:51.422713    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:52:51.422717    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:52:51.422720    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:52:51.422724    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:53:01.425223    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:06.427428    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:06.427591    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:53:06.440963    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:53:06.441056    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:53:06.452247    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:53:06.452326    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:53:06.463139    5132 logs.go:282] 4 containers: [7374e35c93c9 f86fa2ef9959 02d8d43dfbea 64247a147667]
	I1209 16:53:06.463222    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:53:06.476955    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:53:06.477031    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:53:06.487360    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:53:06.487438    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:53:06.497623    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:53:06.497699    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:53:06.508061    5132 logs.go:282] 0 containers: []
	W1209 16:53:06.508073    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:53:06.508143    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:53:06.518951    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:53:06.518975    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:53:06.518981    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:53:06.534157    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:53:06.534167    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:53:06.550757    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:53:06.550770    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:53:06.568178    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:53:06.568191    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:53:06.602651    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:53:06.602745    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:53:06.603873    5132 logs.go:123] Gathering logs for coredns [7374e35c93c9] ...
	I1209 16:53:06.603882    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7374e35c93c9"
	I1209 16:53:06.619384    5132 logs.go:123] Gathering logs for coredns [f86fa2ef9959] ...
	I1209 16:53:06.619396    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86fa2ef9959"
	I1209 16:53:06.630602    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:53:06.630613    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:53:06.655906    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:53:06.655916    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:53:06.669172    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:53:06.669185    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:53:06.709608    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:53:06.709619    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:53:06.723441    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:53:06.723454    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:53:06.735896    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:53:06.735910    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:53:06.740562    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:53:06.740571    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:53:06.752308    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:53:06.752317    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:53:06.772094    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:53:06.772105    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:53:06.793175    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:53:06.793187    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:53:06.793214    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:53:06.793219    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:53:06.793223    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:53:06.793227    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:53:06.793230    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:53:16.796573    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:21.797066    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:21.797320    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:53:21.835380    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:53:21.835487    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:53:21.849828    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:53:21.849898    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:53:21.866511    5132 logs.go:282] 4 containers: [7374e35c93c9 f86fa2ef9959 02d8d43dfbea 64247a147667]
	I1209 16:53:21.866593    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:53:21.876956    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:53:21.877037    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:53:21.887661    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:53:21.887726    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:53:21.899187    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:53:21.899261    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:53:21.909957    5132 logs.go:282] 0 containers: []
	W1209 16:53:21.909969    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:53:21.910037    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:53:21.920744    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:53:21.920766    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:53:21.920771    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:53:21.954772    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:53:21.954865    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:53:21.955923    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:53:21.955929    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:53:21.960283    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:53:21.960292    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:53:21.976613    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:53:21.976622    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:53:22.013583    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:53:22.013593    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:53:22.028433    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:53:22.028446    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:53:22.040055    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:53:22.040065    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:53:22.053111    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:53:22.053123    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:53:22.071071    5132 logs.go:123] Gathering logs for coredns [7374e35c93c9] ...
	I1209 16:53:22.071080    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7374e35c93c9"
	I1209 16:53:22.082888    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:53:22.082897    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:53:22.097556    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:53:22.097565    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:53:22.123464    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:53:22.123472    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:53:22.135319    5132 logs.go:123] Gathering logs for coredns [f86fa2ef9959] ...
	I1209 16:53:22.135329    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86fa2ef9959"
	I1209 16:53:22.147809    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:53:22.147820    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:53:22.159790    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:53:22.159801    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:53:22.174894    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:53:22.174904    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:53:22.174931    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:53:22.174963    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:53:22.174969    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:53:22.174973    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:53:22.174980    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:53:32.183091    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:37.186894    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:37.187039    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:53:37.202023    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:53:37.202110    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:53:37.212933    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:53:37.213016    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:53:37.223501    5132 logs.go:282] 4 containers: [7374e35c93c9 f86fa2ef9959 02d8d43dfbea 64247a147667]
	I1209 16:53:37.223590    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:53:37.241996    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:53:37.242072    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:53:37.252421    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:53:37.252495    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:53:37.263037    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:53:37.263111    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:53:37.273358    5132 logs.go:282] 0 containers: []
	W1209 16:53:37.273369    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:53:37.273436    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:53:37.284656    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:53:37.284675    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:53:37.284681    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:53:37.289428    5132 logs.go:123] Gathering logs for coredns [7374e35c93c9] ...
	I1209 16:53:37.289435    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7374e35c93c9"
	I1209 16:53:37.301394    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:53:37.301404    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:53:37.316340    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:53:37.316351    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:53:37.328398    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:53:37.328411    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:53:37.362468    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:53:37.362481    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:53:37.376766    5132 logs.go:123] Gathering logs for coredns [f86fa2ef9959] ...
	I1209 16:53:37.376777    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86fa2ef9959"
	I1209 16:53:37.388742    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:53:37.388754    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:53:37.400818    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:53:37.400829    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:53:37.435079    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:53:37.435174    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:53:37.436301    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:53:37.436307    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:53:37.451403    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:53:37.451412    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:53:37.463628    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:53:37.463638    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:53:37.481383    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:53:37.481393    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:53:37.506527    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:53:37.506538    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:53:37.524639    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:53:37.524651    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:53:37.537884    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:53:37.537897    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:53:37.537922    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:53:37.537927    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:53:37.537930    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:53:37.537943    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:53:37.537948    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:53:47.544035    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:52.546908    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:52.547289    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:53:52.593457    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:53:52.593600    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:53:52.612465    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:53:52.612555    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:53:52.626416    5132 logs.go:282] 4 containers: [7374e35c93c9 f86fa2ef9959 02d8d43dfbea 64247a147667]
	I1209 16:53:52.626502    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:53:52.638790    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:53:52.638864    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:53:52.652303    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:53:52.652373    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:53:52.663298    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:53:52.663373    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:53:52.673973    5132 logs.go:282] 0 containers: []
	W1209 16:53:52.673982    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:53:52.674041    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:53:52.684910    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:53:52.684926    5132 logs.go:123] Gathering logs for coredns [7374e35c93c9] ...
	I1209 16:53:52.684931    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7374e35c93c9"
	I1209 16:53:52.698068    5132 logs.go:123] Gathering logs for coredns [f86fa2ef9959] ...
	I1209 16:53:52.698079    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86fa2ef9959"
	I1209 16:53:52.710140    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:53:52.710154    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:53:52.727974    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:53:52.727987    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:53:52.732404    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:53:52.732412    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:53:52.748011    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:53:52.748022    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:53:52.772847    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:53:52.772857    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:53:52.785991    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:53:52.786001    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:53:52.819629    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:53:52.819722    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:53:52.820817    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:53:52.820825    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:53:52.832594    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:53:52.832605    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:53:52.844692    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:53:52.844704    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:53:52.856917    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:53:52.856928    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:53:52.892291    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:53:52.892304    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:53:52.906537    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:53:52.906550    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:53:52.921239    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:53:52.921250    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:53:52.938963    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:53:52.938972    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:53:52.938999    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:53:52.939003    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:53:52.939006    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:53:52.939009    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:53:52.939025    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:54:02.943857    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:54:07.946381    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:07.946594    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:54:07.973848    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:54:07.973991    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:54:07.991818    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:54:07.991912    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:54:08.005859    5132 logs.go:282] 4 containers: [7374e35c93c9 f86fa2ef9959 02d8d43dfbea 64247a147667]
	I1209 16:54:08.005944    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:54:08.017174    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:54:08.017245    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:54:08.027878    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:54:08.027969    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:54:08.043360    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:54:08.043441    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:54:08.065054    5132 logs.go:282] 0 containers: []
	W1209 16:54:08.065071    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:54:08.065135    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:54:08.077681    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:54:08.077713    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:54:08.077721    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:54:08.093259    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:54:08.093272    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:54:08.111029    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:54:08.111039    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:54:08.123128    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:54:08.123141    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:54:08.159674    5132 logs.go:123] Gathering logs for coredns [f86fa2ef9959] ...
	I1209 16:54:08.159686    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86fa2ef9959"
	I1209 16:54:08.180493    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:54:08.180505    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:54:08.193618    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:54:08.193630    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:54:08.205391    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:54:08.205401    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:54:08.239100    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:54:08.239191    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:54:08.240249    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:54:08.240254    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:54:08.244797    5132 logs.go:123] Gathering logs for coredns [7374e35c93c9] ...
	I1209 16:54:08.244806    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7374e35c93c9"
	I1209 16:54:08.261201    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:54:08.261212    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:54:08.285941    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:54:08.285950    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:54:08.298311    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:54:08.298324    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:54:08.310014    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:54:08.310025    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:54:08.325276    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:54:08.325286    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:54:08.339734    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:54:08.339746    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:54:08.339771    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:54:08.339776    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:54:08.339783    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:54:08.339787    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:54:08.339790    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:54:18.343351    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:54:23.345819    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:23.350461    5132 out.go:201] 
	W1209 16:54:23.354355    5132 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1209 16:54:23.354367    5132 out.go:270] * 
	W1209 16:54:23.355241    5132 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:54:23.368302    5132 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-688000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-12-09 16:54:23.451729 -0800 PST m=+4295.637326543
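The repeated "Checking apiserver healthz ... stopped: context deadline exceeded" cycles in the stderr stream above are minikube polling the guest apiserver until its 6m0s node-wait budget expires, which is what the GUEST_START exit reports. Below is a minimal, hedged Go sketch of that style of poll loop; it is not minikube's actual api_server.go, and the endpoint URL, the 5s per-request timeout, and the ~10s retry cadence are assumptions read off the log timestamps.

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or the overall budget expires.
func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		// ~5s per request: matches the gap between each "Checking" and "stopped" line above.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: the guest apiserver serves a self-signed cert, so the probe skips verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), overall)
	defer cancel()
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported healthy
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver healthz never reported healthy: %w", ctx.Err())
		case <-time.After(10 * time.Second): // retry cadence seen between attempts in the log
		}
	}
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("X Exiting:", err)
	}
}

Run against a VM whose apiserver never answers, as here, this sketch fails the same way: each probe times out awaiting headers, and the context deadline error surfaces once the overall budget elapses.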
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-688000 -n running-upgrade-688000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-688000 -n running-upgrade-688000: exit status 2 (15.655506583s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
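The post-mortem `minikube logs -n 25` run below re-collects the same per-component container logs that each retry cycle above gathered: `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` to find the container, then `docker logs --tail 400 <id>`. The following is a hedged standalone sketch of that gather pattern; minikube executes these commands over SSH inside the guest (ssh_runner.go), whereas this version shells out to a local docker CLI, so the local-exec approach and the helper name are illustrative assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists container IDs whose name matches minikube's k8s_<component> filter.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			// Mirrors the `No container was found matching "kindnet"` warning above.
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			fmt.Printf("Gathering logs for %s [%s] ...\n", c, id)
			// --tail 400 mirrors the per-container limit used throughout the log above.
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(out))
		}
	}
}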
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-688000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-795000          | force-systemd-flag-795000 | jenkins | v1.34.0 | 09 Dec 24 16:44 PST |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-355000              | force-systemd-env-355000  | jenkins | v1.34.0 | 09 Dec 24 16:44 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-355000           | force-systemd-env-355000  | jenkins | v1.34.0 | 09 Dec 24 16:44 PST | 09 Dec 24 16:44 PST |
	| start   | -p docker-flags-549000                | docker-flags-549000       | jenkins | v1.34.0 | 09 Dec 24 16:44 PST |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-795000             | force-systemd-flag-795000 | jenkins | v1.34.0 | 09 Dec 24 16:44 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-795000          | force-systemd-flag-795000 | jenkins | v1.34.0 | 09 Dec 24 16:44 PST | 09 Dec 24 16:44 PST |
	| start   | -p cert-expiration-966000             | cert-expiration-966000    | jenkins | v1.34.0 | 09 Dec 24 16:44 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-549000 ssh               | docker-flags-549000       | jenkins | v1.34.0 | 09 Dec 24 16:44 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-549000 ssh               | docker-flags-549000       | jenkins | v1.34.0 | 09 Dec 24 16:44 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-549000                | docker-flags-549000       | jenkins | v1.34.0 | 09 Dec 24 16:44 PST | 09 Dec 24 16:44 PST |
	| start   | -p cert-options-274000                | cert-options-274000       | jenkins | v1.34.0 | 09 Dec 24 16:44 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-274000 ssh               | cert-options-274000       | jenkins | v1.34.0 | 09 Dec 24 16:44 PST |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-274000 -- sudo        | cert-options-274000       | jenkins | v1.34.0 | 09 Dec 24 16:44 PST |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-274000                | cert-options-274000       | jenkins | v1.34.0 | 09 Dec 24 16:44 PST | 09 Dec 24 16:44 PST |
	| start   | -p running-upgrade-688000             | minikube                  | jenkins | v1.26.0 | 09 Dec 24 16:44 PST | 09 Dec 24 16:45 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-688000             | running-upgrade-688000    | jenkins | v1.34.0 | 09 Dec 24 16:45 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-966000             | cert-expiration-966000    | jenkins | v1.34.0 | 09 Dec 24 16:47 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-966000             | cert-expiration-966000    | jenkins | v1.34.0 | 09 Dec 24 16:47 PST | 09 Dec 24 16:47 PST |
	| start   | -p kubernetes-upgrade-418000          | kubernetes-upgrade-418000 | jenkins | v1.34.0 | 09 Dec 24 16:47 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-418000          | kubernetes-upgrade-418000 | jenkins | v1.34.0 | 09 Dec 24 16:48 PST | 09 Dec 24 16:48 PST |
	| start   | -p kubernetes-upgrade-418000          | kubernetes-upgrade-418000 | jenkins | v1.34.0 | 09 Dec 24 16:48 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-418000          | kubernetes-upgrade-418000 | jenkins | v1.34.0 | 09 Dec 24 16:48 PST | 09 Dec 24 16:48 PST |
	| start   | -p stopped-upgrade-632000             | minikube                  | jenkins | v1.26.0 | 09 Dec 24 16:48 PST | 09 Dec 24 16:48 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-632000 stop           | minikube                  | jenkins | v1.26.0 | 09 Dec 24 16:48 PST | 09 Dec 24 16:49 PST |
	| start   | -p stopped-upgrade-632000             | stopped-upgrade-632000    | jenkins | v1.34.0 | 09 Dec 24 16:49 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 16:49:05
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
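
The header above declares the glog-style format used for every entry that follows: a severity letter (I, W, E, or F), the date as mmdd, a microsecond timestamp, the emitting thread id, and the source file:line, followed by the message. As an illustration only (a hypothetical helper, not part of minikube), a minimal Go sketch that splits one of these lines into its fields:

	package main

	import (
		"fmt"
		"regexp"
	)

	// glogLine mirrors the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg"
	// format declared in the log header above.
	var glogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ ]+:\d+)\] (.*)$`)

	func main() {
		line := "I1209 16:49:05.860785    5393 out.go:345] Setting OutFile to fd 1 ..."
		if m := glogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s tid=%s source=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}
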
	I1209 16:49:05.860785    5393 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:49:05.860964    5393 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:49:05.860968    5393 out.go:358] Setting ErrFile to fd 2...
	I1209 16:49:05.860970    5393 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:49:05.861109    5393 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:49:05.862322    5393 out.go:352] Setting JSON to false
	I1209 16:49:05.882425    5393 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4715,"bootTime":1733787030,"procs":536,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:49:05.882501    5393 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:49:05.886882    5393 out.go:177] * [stopped-upgrade-632000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:49:05.894778    5393 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:49:05.894824    5393 notify.go:220] Checking for updates...
	I1209 16:49:05.902768    5393 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:49:05.906768    5393 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:49:05.910776    5393 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:49:05.913775    5393 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:49:05.916749    5393 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:49:05.920174    5393 config.go:182] Loaded profile config "stopped-upgrade-632000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:49:05.923763    5393 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1209 16:49:05.926819    5393 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:49:05.929748    5393 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 16:49:05.936789    5393 start.go:297] selected driver: qemu2
	I1209 16:49:05.936795    5393 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:65214 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1209 16:49:05.936841    5393 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:49:05.939597    5393 cni.go:84] Creating CNI manager for ""
	I1209 16:49:05.939629    5393 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:49:05.939661    5393 start.go:340] cluster config:
	{Name:stopped-upgrade-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:65214 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1209 16:49:05.939711    5393 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:49:05.948747    5393 out.go:177] * Starting "stopped-upgrade-632000" primary control-plane node in "stopped-upgrade-632000" cluster
	I1209 16:49:05.952743    5393 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1209 16:49:05.952761    5393 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1209 16:49:05.952773    5393 cache.go:56] Caching tarball of preloaded images
	I1209 16:49:05.952848    5393 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:49:05.952857    5393 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1209 16:49:05.952912    5393 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/config.json ...
	I1209 16:49:05.953291    5393 start.go:360] acquireMachinesLock for stopped-upgrade-632000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:49:05.953331    5393 start.go:364] duration metric: took 32.75µs to acquireMachinesLock for "stopped-upgrade-632000"
	I1209 16:49:05.953344    5393 start.go:96] Skipping create...Using existing machine configuration
	I1209 16:49:05.953349    5393 fix.go:54] fixHost starting: 
	I1209 16:49:05.953456    5393 fix.go:112] recreateIfNeeded on stopped-upgrade-632000: state=Stopped err=<nil>
	W1209 16:49:05.953464    5393 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 16:49:05.957823    5393 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-632000" ...
	I1209 16:49:02.568399    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:05.965760    5393 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:49:05.965847    5393 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/qemu.pid -nic user,model=virtio,hostfwd=tcp::65179-:22,hostfwd=tcp::65180-:2376,hostname=stopped-upgrade-632000 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/disk.qcow2
	I1209 16:49:06.014528    5393 main.go:141] libmachine: STDOUT: 
	I1209 16:49:06.014561    5393 main.go:141] libmachine: STDERR: 
	I1209 16:49:06.014569    5393 main.go:141] libmachine: Waiting for VM to start (ssh -p 65179 docker@127.0.0.1)...
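
The qemu-system-aarch64 invocation above boots the existing disk image headless under hvf acceleration and forwards host ports into the guest (65179 to the guest's SSH port 22, 65180 to the Docker daemon on 2376), so "Waiting for VM to start" reduces to polling the forwarded SSH endpoint. A rough sketch of such a readiness wait, assuming plain TCP reachability as the signal (the real code also completes an SSH handshake as docker@127.0.0.1):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForPort polls a host-forwarded port until it accepts a TCP
	// connection or the deadline passes. Hypothetical helper, not minikube's API.
	func waitForPort(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", addr)
	}

	func main() {
		// the port matches the hostfwd=tcp::65179-:22 mapping in the log above
		if err := waitForPort("127.0.0.1:65179", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
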
	I1209 16:49:07.571323    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:07.572077    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:49:07.620356    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:49:07.620507    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:49:07.639294    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:49:07.639397    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:49:07.653380    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:49:07.653460    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:49:07.665494    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:49:07.665581    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:49:07.676750    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:49:07.676823    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:49:07.687507    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:49:07.687581    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:49:07.697547    5132 logs.go:282] 0 containers: []
	W1209 16:49:07.697558    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:49:07.697616    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:49:07.714758    5132 logs.go:282] 0 containers: []
	W1209 16:49:07.714771    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:49:07.714782    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:49:07.714788    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:49:07.753055    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:49:07.753064    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:49:07.796353    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:49:07.796365    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:49:07.810930    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:49:07.810943    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:49:07.822655    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:49:07.822667    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:49:07.840455    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:49:07.840468    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:49:07.845109    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:49:07.845118    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:49:07.858617    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:49:07.858628    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:49:07.874276    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:49:07.874290    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:49:07.888674    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:49:07.888687    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:49:07.905361    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:49:07.905374    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:49:07.920214    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:49:07.920226    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:49:07.931502    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:49:07.931514    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:49:07.943222    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:49:07.943233    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:49:07.954717    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:49:07.954730    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
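
From here on, the lines from pid 5132 (the parallel running-upgrade test) repeat a fixed diagnostic cycle: the apiserver healthz probe against https://10.0.2.15:8443/healthz times out, so the runner enumerates each control-plane component's containers with a docker ps name filter and tails the last 400 lines of every match. A condensed sketch of that gather step, assuming shelling out to docker is acceptable (hypothetical helper names, not the actual logs.go structure):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containersFor mirrors the `docker ps -a --filter=name=k8s_<component>
	// --format={{.ID}}` calls in the trace above.
	func containersFor(component string) []string {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager"}
		for _, c := range components {
			for _, id := range containersFor(c) {
				// corresponds to the `docker logs --tail 400 <id>` lines above
				fmt.Printf("docker logs --tail 400 %s\n", id)
			}
		}
	}
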
	I1209 16:49:10.481944    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:15.484662    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:15.484785    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:49:15.496209    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:49:15.496286    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:49:15.507747    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:49:15.507815    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:49:15.518817    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:49:15.518893    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:49:15.529725    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:49:15.529807    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:49:15.540447    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:49:15.540522    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:49:15.553825    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:49:15.553900    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:49:15.564941    5132 logs.go:282] 0 containers: []
	W1209 16:49:15.564953    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:49:15.565018    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:49:15.575012    5132 logs.go:282] 0 containers: []
	W1209 16:49:15.575029    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:49:15.575037    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:49:15.575043    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:49:15.613760    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:49:15.613769    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:49:15.627679    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:49:15.627691    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:49:15.640848    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:49:15.640859    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:49:15.654218    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:49:15.654228    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:49:15.658888    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:49:15.658900    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:49:15.697991    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:49:15.698006    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:49:15.715956    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:49:15.715967    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:49:15.729712    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:49:15.729726    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:49:15.745048    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:49:15.745061    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:49:15.757270    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:49:15.757279    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:49:15.781006    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:49:15.781012    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:49:15.792918    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:49:15.792928    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:49:15.807378    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:49:15.807391    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:49:15.819734    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:49:15.819745    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:49:18.332830    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:23.335004    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:23.335257    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:49:23.358677    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:49:23.358805    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:49:23.372208    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:49:23.372283    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:49:23.383783    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:49:23.383862    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:49:23.394076    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:49:23.394144    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:49:23.404224    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:49:23.404305    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:49:23.414687    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:49:23.414755    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:49:23.424838    5132 logs.go:282] 0 containers: []
	W1209 16:49:23.424847    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:49:23.424905    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:49:23.434890    5132 logs.go:282] 0 containers: []
	W1209 16:49:23.434902    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:49:23.434911    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:49:23.434918    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:49:23.472567    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:49:23.472575    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:49:23.507907    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:49:23.507922    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:49:23.524047    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:49:23.524061    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:49:23.535366    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:49:23.535377    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:49:23.551240    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:49:23.551254    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:49:23.565561    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:49:23.565572    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:49:23.588455    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:49:23.588461    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:49:23.604506    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:49:23.604519    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:49:23.616085    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:49:23.616096    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:49:23.634481    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:49:23.634490    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:49:23.639206    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:49:23.639213    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:49:23.654006    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:49:23.654018    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:49:23.667429    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:49:23.667442    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:49:23.679042    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:49:23.679055    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:49:26.193441    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:26.870471    5393 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/config.json ...
	I1209 16:49:26.871245    5393 machine.go:93] provisionDockerMachine start ...
	I1209 16:49:26.871448    5393 main.go:141] libmachine: Using SSH client type: native
	I1209 16:49:26.871925    5393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c72fc0] 0x104c75800 <nil>  [] 0s} localhost 65179 <nil> <nil>}
	I1209 16:49:26.871940    5393 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 16:49:26.963189    5393 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 16:49:26.963224    5393 buildroot.go:166] provisioning hostname "stopped-upgrade-632000"
	I1209 16:49:26.963360    5393 main.go:141] libmachine: Using SSH client type: native
	I1209 16:49:26.963581    5393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c72fc0] 0x104c75800 <nil>  [] 0s} localhost 65179 <nil> <nil>}
	I1209 16:49:26.963595    5393 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-632000 && echo "stopped-upgrade-632000" | sudo tee /etc/hostname
	I1209 16:49:27.052748    5393 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-632000
	
	I1209 16:49:27.052842    5393 main.go:141] libmachine: Using SSH client type: native
	I1209 16:49:27.052988    5393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c72fc0] 0x104c75800 <nil>  [] 0s} localhost 65179 <nil> <nil>}
	I1209 16:49:27.053000    5393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-632000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-632000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-632000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 16:49:27.131946    5393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 16:49:27.131963    5393 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20062-1231/.minikube CaCertPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20062-1231/.minikube}
	I1209 16:49:27.131972    5393 buildroot.go:174] setting up certificates
	I1209 16:49:27.131977    5393 provision.go:84] configureAuth start
	I1209 16:49:27.131982    5393 provision.go:143] copyHostCerts
	I1209 16:49:27.132052    5393 exec_runner.go:144] found /Users/jenkins/minikube-integration/20062-1231/.minikube/cert.pem, removing ...
	I1209 16:49:27.132061    5393 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20062-1231/.minikube/cert.pem
	I1209 16:49:27.132176    5393 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20062-1231/.minikube/cert.pem (1123 bytes)
	I1209 16:49:27.132413    5393 exec_runner.go:144] found /Users/jenkins/minikube-integration/20062-1231/.minikube/key.pem, removing ...
	I1209 16:49:27.132417    5393 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20062-1231/.minikube/key.pem
	I1209 16:49:27.132470    5393 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20062-1231/.minikube/key.pem (1675 bytes)
	I1209 16:49:27.132613    5393 exec_runner.go:144] found /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.pem, removing ...
	I1209 16:49:27.132616    5393 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.pem
	I1209 16:49:27.132660    5393 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.pem (1082 bytes)
	I1209 16:49:27.132791    5393 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-632000 san=[127.0.0.1 localhost minikube stopped-upgrade-632000]
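
The server certificate generated above carries the SAN set [127.0.0.1 localhost minikube stopped-upgrade-632000] so the Docker TLS endpoint can be reached under any of those names. For orientation only, a minimal Go sketch of issuing a certificate with that SAN list via crypto/x509 (self-signed here for brevity; the real flow signs with the ca.pem/ca-key.pem pair named in the log):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-632000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
			DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-632000"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// self-signed for brevity; the provisioner signs with the CA key instead
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		_ = der // would be PEM-encoded and written to server.pem
	}
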
	I1209 16:49:27.330833    5393 provision.go:177] copyRemoteCerts
	I1209 16:49:27.330902    5393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 16:49:27.330911    5393 sshutil.go:53] new ssh client: &{IP:localhost Port:65179 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/id_rsa Username:docker}
	I1209 16:49:27.369834    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 16:49:27.376896    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1209 16:49:27.383666    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 16:49:27.390832    5393 provision.go:87] duration metric: took 258.847542ms to configureAuth
	I1209 16:49:27.390842    5393 buildroot.go:189] setting minikube options for container-runtime
	I1209 16:49:27.390949    5393 config.go:182] Loaded profile config "stopped-upgrade-632000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:49:27.390999    5393 main.go:141] libmachine: Using SSH client type: native
	I1209 16:49:27.391097    5393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c72fc0] 0x104c75800 <nil>  [] 0s} localhost 65179 <nil> <nil>}
	I1209 16:49:27.391102    5393 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1209 16:49:27.462336    5393 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1209 16:49:27.462344    5393 buildroot.go:70] root file system type: tmpfs
	I1209 16:49:27.462402    5393 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1209 16:49:27.462458    5393 main.go:141] libmachine: Using SSH client type: native
	I1209 16:49:27.462564    5393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c72fc0] 0x104c75800 <nil>  [] 0s} localhost 65179 <nil> <nil>}
	I1209 16:49:27.462601    5393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1209 16:49:27.537875    5393 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1209 16:49:27.537936    5393 main.go:141] libmachine: Using SSH client type: native
	I1209 16:49:27.538044    5393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c72fc0] 0x104c75800 <nil>  [] 0s} localhost 65179 <nil> <nil>}
	I1209 16:49:27.538053    5393 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1209 16:49:27.926148    5393 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1209 16:49:27.926162    5393 machine.go:96] duration metric: took 1.05491125s to provisionDockerMachine
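
The diff-guarded one-liner above (diff -u old new || { mv new old; daemon-reload; enable; restart; }) is an idempotency guard: the rendered docker.service only replaces the installed unit, and docker is only restarted, when the content actually differs. In this run the diff fails because no unit existed yet, so the new file is moved into place and the symlink is created. A hypothetical Go rendering of the same write-if-changed idea:

	package main

	import (
		"bytes"
		"os"
	)

	// writeIfChanged replaces path with rendered only when the content differs,
	// reporting whether the caller needs to daemon-reload and restart the service.
	func writeIfChanged(path string, rendered []byte) (bool, error) {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, rendered) {
			return false, nil // unit already up to date: skip the restart
		}
		if err := os.WriteFile(path, rendered, 0o644); err != nil {
			return false, err
		}
		return true, nil
	}

	func main() {
		_, _ = writeIfChanged("/lib/systemd/system/docker.service", []byte("[Unit]\n"))
	}
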
	I1209 16:49:27.926169    5393 start.go:293] postStartSetup for "stopped-upgrade-632000" (driver="qemu2")
	I1209 16:49:27.926176    5393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 16:49:27.926249    5393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 16:49:27.926258    5393 sshutil.go:53] new ssh client: &{IP:localhost Port:65179 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/id_rsa Username:docker}
	I1209 16:49:27.968299    5393 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 16:49:27.969564    5393 info.go:137] Remote host: Buildroot 2021.02.12
	I1209 16:49:27.969572    5393 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20062-1231/.minikube/addons for local assets ...
	I1209 16:49:27.969640    5393 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20062-1231/.minikube/files for local assets ...
	I1209 16:49:27.969733    5393 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20062-1231/.minikube/files/etc/ssl/certs/17422.pem -> 17422.pem in /etc/ssl/certs
	I1209 16:49:27.969841    5393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 16:49:27.972344    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/files/etc/ssl/certs/17422.pem --> /etc/ssl/certs/17422.pem (1708 bytes)
	I1209 16:49:27.978855    5393 start.go:296] duration metric: took 52.681792ms for postStartSetup
	I1209 16:49:27.978868    5393 fix.go:56] duration metric: took 22.025602791s for fixHost
	I1209 16:49:27.978903    5393 main.go:141] libmachine: Using SSH client type: native
	I1209 16:49:27.978993    5393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c72fc0] 0x104c75800 <nil>  [] 0s} localhost 65179 <nil> <nil>}
	I1209 16:49:27.978998    5393 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 16:49:28.049658    5393 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733791768.267420880
	
	I1209 16:49:28.049668    5393 fix.go:216] guest clock: 1733791768.267420880
	I1209 16:49:28.049672    5393 fix.go:229] Guest: 2024-12-09 16:49:28.26742088 -0800 PST Remote: 2024-12-09 16:49:27.97887 -0800 PST m=+22.148310668 (delta=288.55088ms)
	I1209 16:49:28.049684    5393 fix.go:200] guest clock delta is within tolerance: 288.55088ms
	I1209 16:49:28.049687    5393 start.go:83] releasing machines lock for "stopped-upgrade-632000", held for 22.096433333s
	I1209 16:49:28.049756    5393 ssh_runner.go:195] Run: cat /version.json
	I1209 16:49:28.049761    5393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 16:49:28.049765    5393 sshutil.go:53] new ssh client: &{IP:localhost Port:65179 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/id_rsa Username:docker}
	I1209 16:49:28.049779    5393 sshutil.go:53] new ssh client: &{IP:localhost Port:65179 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/id_rsa Username:docker}
	W1209 16:49:28.050283    5393 sshutil.go:64] dial failure (will retry): dial tcp [::1]:65179: connect: connection refused
	I1209 16:49:28.050306    5393 retry.go:31] will retry after 125.446271ms: dial tcp [::1]:65179: connect: connection refused
	W1209 16:49:28.216327    5393 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1209 16:49:28.216389    5393 ssh_runner.go:195] Run: systemctl --version
	I1209 16:49:28.218511    5393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 16:49:28.220521    5393 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 16:49:28.220569    5393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1209 16:49:28.223791    5393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1209 16:49:28.228921    5393 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1209 16:49:28.228931    5393 start.go:495] detecting cgroup driver to use...
	I1209 16:49:28.229029    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 16:49:28.237386    5393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1209 16:49:28.240475    5393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 16:49:28.243463    5393 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 16:49:28.243500    5393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 16:49:28.246878    5393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 16:49:28.250252    5393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 16:49:28.253557    5393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 16:49:28.256909    5393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 16:49:28.259677    5393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 16:49:28.262832    5393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 16:49:28.266184    5393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 16:49:28.269600    5393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 16:49:28.272241    5393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 16:49:28.274994    5393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:49:28.354437    5393 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1209 16:49:28.360963    5393 start.go:495] detecting cgroup driver to use...
	I1209 16:49:28.361046    5393 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1209 16:49:28.370078    5393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 16:49:28.375539    5393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 16:49:28.385226    5393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 16:49:28.389663    5393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 16:49:28.393942    5393 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 16:49:28.440018    5393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 16:49:28.445021    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 16:49:28.450283    5393 ssh_runner.go:195] Run: which cri-dockerd
	I1209 16:49:28.451531    5393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1209 16:49:28.454665    5393 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1209 16:49:28.459831    5393 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1209 16:49:28.538896    5393 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1209 16:49:28.620514    5393 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1209 16:49:28.620577    5393 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1209 16:49:28.626164    5393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:49:28.706990    5393 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1209 16:49:29.862034    5393 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.155027583s)
	I1209 16:49:29.862111    5393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1209 16:49:29.867106    5393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1209 16:49:29.872104    5393 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1209 16:49:29.960611    5393 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1209 16:49:30.038907    5393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:49:30.101799    5393 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1209 16:49:30.107853    5393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1209 16:49:30.112289    5393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:49:30.192375    5393 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1209 16:49:30.237371    5393 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1209 16:49:30.237474    5393 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1209 16:49:30.239974    5393 start.go:563] Will wait 60s for crictl version
	I1209 16:49:30.240024    5393 ssh_runner.go:195] Run: which crictl
	I1209 16:49:30.241434    5393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 16:49:30.256499    5393 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1209 16:49:30.256574    5393 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1209 16:49:30.274301    5393 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1209 16:49:30.292859    5393 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1209 16:49:30.293012    5393 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1209 16:49:30.294294    5393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
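
The bash pipeline above is the usual rewrite-in-place idiom for /etc/hosts: filter out any existing host.minikube.internal entry, append the fresh 10.0.2.2 mapping, write the result to a temp file, and copy it back with sudo. A rough Go equivalent of the filtering step (a sketch with a hypothetical helper name):

	package main

	import (
		"fmt"
		"strings"
	)

	// upsertHost drops lines ending in suffix and appends entry, mirroring
	// the grep -v / echo / cp pipeline in the log above.
	func upsertHost(hosts, suffix, entry string) string {
		var keep []string
		for _, l := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(l, suffix) {
				keep = append(keep, l)
			}
		}
		keep = append(keep, entry)
		return strings.Join(keep, "\n") + "\n"
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n10.0.2.3\thost.minikube.internal\n"
		fmt.Print(upsertHost(hosts, "\thost.minikube.internal", "10.0.2.2\thost.minikube.internal"))
	}
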
	I1209 16:49:30.298433    5393 kubeadm.go:883] updating cluster {Name:stopped-upgrade-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:65214 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1209 16:49:30.298480    5393 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1209 16:49:30.298527    5393 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1209 16:49:30.308990    5393 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1209 16:49:30.308999    5393 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
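
The "wasn't preloaded" verdict above follows from a registry rename: the v1.26.0-era guest has its images tagged under k8s.gcr.io (see the docker images output), while minikube v1.34.0 checks for the registry.k8s.io names, so the existence test fails and the preload tarball is copied in and extracted below. A hypothetical distillation of that membership check:

	package main

	import "fmt"

	// needsPreload reports whether the wanted image is missing from the
	// repo:tag list that `docker images` returned (a sketch, not docker.go).
	func needsPreload(have []string, want string) bool {
		for _, img := range have {
			if img == want {
				return false
			}
		}
		return true
	}

	func main() {
		have := []string{"k8s.gcr.io/kube-apiserver:v1.24.1"} // old registry prefix
		fmt.Println(needsPreload(have, "registry.k8s.io/kube-apiserver:v1.24.1")) // true
	}
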
	I1209 16:49:30.309058    5393 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1209 16:49:30.312128    5393 ssh_runner.go:195] Run: which lz4
	I1209 16:49:30.313369    5393 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 16:49:30.314674    5393 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 16:49:30.314690    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1209 16:49:31.194112    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:31.194228    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:49:31.205884    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:49:31.205969    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:49:31.219805    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:49:31.219892    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:49:31.231783    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:49:31.231856    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:49:31.243426    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:49:31.243507    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:49:31.255446    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:49:31.255529    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:49:31.267481    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:49:31.267561    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:49:31.283387    5132 logs.go:282] 0 containers: []
	W1209 16:49:31.283401    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:49:31.283474    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:49:31.294706    5132 logs.go:282] 0 containers: []
	W1209 16:49:31.294721    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:49:31.294729    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:49:31.294740    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:49:31.335288    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:49:31.335301    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:49:31.349732    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:49:31.349751    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:49:31.362511    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:49:31.362526    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:49:31.390974    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:49:31.390994    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:49:31.428699    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:49:31.428718    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:49:31.447159    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:49:31.447177    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:49:31.460105    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:49:31.460117    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:49:31.481561    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:49:31.481579    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:49:31.494830    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:49:31.494841    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:49:31.509976    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:49:31.509989    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:49:31.514614    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:49:31.514625    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:49:31.530300    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:49:31.530317    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:49:31.543355    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:49:31.543367    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:49:31.562422    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:49:31.562436    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:49:31.264667    5393 docker.go:653] duration metric: took 951.346167ms to copy over tarball
	I1209 16:49:31.264740    5393 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 16:49:32.450286    5393 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.185533917s)
	I1209 16:49:32.450302    5393 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1209 16:49:32.466173    5393 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1209 16:49:32.469512    5393 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1209 16:49:32.474719    5393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:49:32.554131    5393 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1209 16:49:33.765212    5393 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.211066458s)
	I1209 16:49:33.765327    5393 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1209 16:49:33.776232    5393 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1209 16:49:33.776241    5393 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1209 16:49:33.776248    5393 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
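The mismatch that makes "wasn't preloaded" fire is visible in the two image lists above: the v1.24.1-era preload tarball ships images tagged under the old k8s.gcr.io registry name, while this minikube build looks them up under registry.k8s.io, so the literal name check fails and every image falls back to the cache-load path that follows. A toy illustration of that string-level check (assumed logic, not the actual implementation):

package main

import "fmt"

func main() {
	// Images reported by `docker images` inside the guest (old registry name).
	got := map[string]bool{"k8s.gcr.io/kube-apiserver:v1.24.1": true}
	// Image this minikube build expects (new registry name).
	want := "registry.k8s.io/kube-apiserver:v1.24.1"
	if !got[want] {
		fmt.Printf("%s wasn't preloaded\n", want)
	}
}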
	I1209 16:49:33.782858    5393 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:49:33.785024    5393 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 16:49:33.786617    5393 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 16:49:33.786631    5393 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:49:33.788599    5393 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 16:49:33.788617    5393 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 16:49:33.790266    5393 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 16:49:33.790268    5393 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 16:49:33.790639    5393 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 16:49:33.791614    5393 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 16:49:33.792762    5393 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 16:49:33.792787    5393 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1209 16:49:33.792819    5393 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 16:49:33.793690    5393 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1209 16:49:33.795396    5393 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1209 16:49:33.795451    5393 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1209 16:49:34.340619    5393 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1209 16:49:34.352486    5393 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1209 16:49:34.352525    5393 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 16:49:34.352596    5393 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1209 16:49:34.363400    5393 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1209 16:49:34.363475    5393 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1209 16:49:34.376679    5393 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1209 16:49:34.376716    5393 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 16:49:34.376770    5393 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1209 16:49:34.387383    5393 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1209 16:49:34.402871    5393 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	W1209 16:49:34.413747    5393 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1209 16:49:34.413925    5393 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1209 16:49:34.414089    5393 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1209 16:49:34.414112    5393 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 16:49:34.414143    5393 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 16:49:34.424903    5393 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1209 16:49:34.424929    5393 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 16:49:34.425013    5393 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1209 16:49:34.425196    5393 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1209 16:49:34.435529    5393 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1209 16:49:34.435656    5393 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1209 16:49:34.437301    5393 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1209 16:49:34.437317    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1209 16:49:34.481287    5393 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1209 16:49:34.481304    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1209 16:49:34.519379    5393 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1209 16:49:34.536944    5393 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1209 16:49:34.547009    5393 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1209 16:49:34.547036    5393 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 16:49:34.547104    5393 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1209 16:49:34.557486    5393 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1209 16:49:34.574796    5393 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1209 16:49:34.585201    5393 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1209 16:49:34.585224    5393 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1209 16:49:34.585283    5393 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1209 16:49:34.595219    5393 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1209 16:49:34.595369    5393 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1209 16:49:34.596842    5393 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1209 16:49:34.596852    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1209 16:49:34.661522    5393 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1209 16:49:34.688018    5393 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1209 16:49:34.688046    5393 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1209 16:49:34.688110    5393 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1209 16:49:34.725746    5393 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1209 16:49:34.725889    5393 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1209 16:49:34.738836    5393 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1209 16:49:34.738874    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	W1209 16:49:34.747747    5393 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1209 16:49:34.747873    5393 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:49:34.774400    5393 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1209 16:49:34.774414    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1209 16:49:34.778347    5393 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1209 16:49:34.778372    5393 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:49:34.778438    5393 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:49:34.840502    5393 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1209 16:49:34.840554    5393 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1209 16:49:34.840699    5393 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1209 16:49:34.852794    5393 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1209 16:49:34.852810    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1209 16:49:34.858557    5393 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1209 16:49:34.858570    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1209 16:49:35.015291    5393 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1209 16:49:35.015321    5393 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1209 16:49:35.015330    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1209 16:49:35.253364    5393 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1209 16:49:35.253404    5393 cache_images.go:92] duration metric: took 1.477154125s to LoadCachedImages
	W1209 16:49:35.253447    5393 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
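Each image above goes through the same loop: `docker image inspect` to see whether the runtime already holds the expected image, `docker rmi` to clear a stale tag, an scp of the cached tarball when it is not yet on the node, and a `docker load` fed from that file. The closing warning fires because kube-proxy_v1.24.1 is missing from the host-side cache, so the whole LoadCachedImages pass is abandoned. A condensed sketch of one iteration (hypothetical cache path; run locally rather than over SSH, and not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func ensureImage(image, cachedTar string) error {
	// `docker image inspect` exits non-zero when the image is absent.
	if exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil {
		return nil // already present in the runtime
	}
	_ = exec.Command("docker", "rmi", image).Run() // best-effort cleanup of a stale tag
	f, err := os.Open(cachedTar)
	if err != nil {
		return err // e.g. the kube-proxy_v1.24.1 cache miss warned about above
	}
	defer f.Close()
	load := exec.Command("docker", "load")
	load.Stdin = f // equivalent of `cat <tar> | docker load`
	return load.Run()
}

func main() {
	if err := ensureImage("registry.k8s.io/pause:3.7",
		"cache/images/arm64/registry.k8s.io/pause_3.7"); err != nil {
		fmt.Fprintln(os.Stderr, "load failed:", err)
		os.Exit(1)
	}
}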
	I1209 16:49:35.253453    5393 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1209 16:49:35.253516    5393 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-632000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 16:49:35.253586    5393 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1209 16:49:35.266557    5393 cni.go:84] Creating CNI manager for ""
	I1209 16:49:35.266575    5393 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:49:35.266584    5393 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 16:49:35.266596    5393 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-632000 NodeName:stopped-upgrade-632000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 16:49:35.266674    5393 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-632000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 16:49:35.266744    5393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1209 16:49:35.270181    5393 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 16:49:35.270230    5393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 16:49:35.273167    5393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1209 16:49:35.278287    5393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 16:49:35.283520    5393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1209 16:49:35.289254    5393 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1209 16:49:35.290634    5393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
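The one-liner above makes the /etc/hosts entry idempotent: drop any existing control-plane.minikube.internal line, append the current mapping, and copy the result back into place. The same effect in a Go sketch (illustrative only; the real command edits the file through a shell on the guest):

package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Keep every line that does not already map the control-plane name.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	// Append the current mapping exactly once.
	kept = append(kept, "10.0.2.15\t"+host)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}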
	I1209 16:49:35.294480    5393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:49:35.369763    5393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 16:49:35.376844    5393 certs.go:68] Setting up /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000 for IP: 10.0.2.15
	I1209 16:49:35.376853    5393 certs.go:194] generating shared ca certs ...
	I1209 16:49:35.376861    5393 certs.go:226] acquiring lock for ca certs: {Name:mk94909c12771095ef5e42af3f5ec988b0b9c452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:49:35.377039    5393 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.key
	I1209 16:49:35.377797    5393 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/proxy-client-ca.key
	I1209 16:49:35.377807    5393 certs.go:256] generating profile certs ...
	I1209 16:49:35.378158    5393 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/client.key
	I1209 16:49:35.378180    5393 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.key.e690b03c
	I1209 16:49:35.378190    5393 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.crt.e690b03c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1209 16:49:35.516829    5393 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.crt.e690b03c ...
	I1209 16:49:35.516846    5393 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.crt.e690b03c: {Name:mk3830187f4b2ffcd1438f36ba321e42de5b5fd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:49:35.517463    5393 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.key.e690b03c ...
	I1209 16:49:35.517469    5393 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.key.e690b03c: {Name:mk27d3b39e4c4496cced2852ebed17b4619826bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:49:35.517659    5393 certs.go:381] copying /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.crt.e690b03c -> /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.crt
	I1209 16:49:35.517800    5393 certs.go:385] copying /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.key.e690b03c -> /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.key
	I1209 16:49:35.518151    5393 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/proxy-client.key
	I1209 16:49:35.518336    5393 certs.go:484] found cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/1742.pem (1338 bytes)
	W1209 16:49:35.518570    5393 certs.go:480] ignoring /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/1742_empty.pem, impossibly tiny 0 bytes
	I1209 16:49:35.518577    5393 certs.go:484] found cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 16:49:35.518604    5393 certs.go:484] found cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem (1082 bytes)
	I1209 16:49:35.518623    5393 certs.go:484] found cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem (1123 bytes)
	I1209 16:49:35.518654    5393 certs.go:484] found cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/key.pem (1675 bytes)
	I1209 16:49:35.518692    5393 certs.go:484] found cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/files/etc/ssl/certs/17422.pem (1708 bytes)
	I1209 16:49:35.519061    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 16:49:35.526156    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 16:49:35.533635    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 16:49:35.541103    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 16:49:35.547493    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 16:49:35.554236    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 16:49:35.561060    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 16:49:35.568360    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 16:49:35.574837    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 16:49:35.581490    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/1742.pem --> /usr/share/ca-certificates/1742.pem (1338 bytes)
	I1209 16:49:35.588181    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/files/etc/ssl/certs/17422.pem --> /usr/share/ca-certificates/17422.pem (1708 bytes)
	I1209 16:49:35.594845    5393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 16:49:35.600065    5393 ssh_runner.go:195] Run: openssl version
	I1209 16:49:35.601929    5393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 16:49:35.604918    5393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 16:49:35.606336    5393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:43 /usr/share/ca-certificates/minikubeCA.pem
	I1209 16:49:35.606367    5393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 16:49:35.608134    5393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 16:49:35.611117    5393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1742.pem && ln -fs /usr/share/ca-certificates/1742.pem /etc/ssl/certs/1742.pem"
	I1209 16:49:35.614344    5393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1742.pem
	I1209 16:49:35.615658    5393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:51 /usr/share/ca-certificates/1742.pem
	I1209 16:49:35.615686    5393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1742.pem
	I1209 16:49:35.617470    5393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1742.pem /etc/ssl/certs/51391683.0"
	I1209 16:49:35.620276    5393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17422.pem && ln -fs /usr/share/ca-certificates/17422.pem /etc/ssl/certs/17422.pem"
	I1209 16:49:35.623159    5393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17422.pem
	I1209 16:49:35.624517    5393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:51 /usr/share/ca-certificates/17422.pem
	I1209 16:49:35.624541    5393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17422.pem
	I1209 16:49:35.626208    5393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17422.pem /etc/ssl/certs/3ec20f2e.0"
	I1209 16:49:35.629275    5393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 16:49:35.630767    5393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 16:49:35.633205    5393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 16:49:35.635156    5393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 16:49:35.637091    5393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 16:49:35.638789    5393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 16:49:35.640750    5393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
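These `openssl x509 -checkend 86400` runs are certificate-expiry probes: the command exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, so a non-zero exit tells minikube the cert needs regenerating. A small wrapper sketch of the same probe (assumed helper, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// certValidForADay reports whether the certificate at path will still be
// valid 24h from now; `-checkend N` exits 0 iff the cert does NOT expire
// within N seconds.
func certValidForADay(path string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run() == nil
}

func main() {
	fmt.Println(certValidForADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}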
	I1209 16:49:35.642535    5393 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:65214 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1209 16:49:35.642615    5393 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1209 16:49:35.652246    5393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 16:49:35.655435    5393 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 16:49:35.655442    5393 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 16:49:35.655474    5393 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 16:49:35.658811    5393 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 16:49:35.659110    5393 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-632000" does not appear in /Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:49:35.659213    5393 kubeconfig.go:62] /Users/jenkins/minikube-integration/20062-1231/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-632000" cluster setting kubeconfig missing "stopped-upgrade-632000" context setting]
	I1209 16:49:35.659407    5393 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/kubeconfig: {Name:mk5092322010dd3bee2f23e3f2812067ca57270f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:49:35.659821    5393 kapi.go:59] client config for stopped-upgrade-632000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/client.key", CAFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1066cf740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 16:49:35.660311    5393 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 16:49:35.662912    5393 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-632000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
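The drift check itself is just a `diff -u` between the kubeadm config already on the node and the freshly rendered one; any non-empty diff, here the criSocket URI scheme and the kubelet cgroupDriver, forces the reconfigure path that follows. A minimal sketch of that decision (assumed logic):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
	if err != nil { // diff exits 1 when the files differ
		fmt.Printf("detected kubeadm config drift, reconfiguring:\n%s", out)
		return
	}
	fmt.Println("kubeadm config unchanged")
}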
	I1209 16:49:35.662920    5393 kubeadm.go:1160] stopping kube-system containers ...
	I1209 16:49:35.662964    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1209 16:49:35.673854    5393 docker.go:483] Stopping containers: [040bb0a9f533 5c9cfb9c3cc2 54ad1b7454b7 6dca5a28bb4e 6b7e5d2fd21a ee48870f525d 61a24778c716 260d1e1d7b2e]
	I1209 16:49:35.673923    5393 ssh_runner.go:195] Run: docker stop 040bb0a9f533 5c9cfb9c3cc2 54ad1b7454b7 6dca5a28bb4e 6b7e5d2fd21a ee48870f525d 61a24778c716 260d1e1d7b2e
	I1209 16:49:35.684847    5393 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 16:49:35.690497    5393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 16:49:35.693691    5393 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 16:49:35.693696    5393 kubeadm.go:157] found existing configuration files:
	
	I1209 16:49:35.693732    5393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/admin.conf
	I1209 16:49:35.696276    5393 kubeadm.go:163] "https://control-plane.minikube.internal:65214" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 16:49:35.696301    5393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 16:49:35.698862    5393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/kubelet.conf
	I1209 16:49:35.701833    5393 kubeadm.go:163] "https://control-plane.minikube.internal:65214" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 16:49:35.701885    5393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 16:49:35.704699    5393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/controller-manager.conf
	I1209 16:49:35.707287    5393 kubeadm.go:163] "https://control-plane.minikube.internal:65214" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 16:49:35.707318    5393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 16:49:35.710384    5393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/scheduler.conf
	I1209 16:49:35.713387    5393 kubeadm.go:163] "https://control-plane.minikube.internal:65214" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 16:49:35.713424    5393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
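The four grep-then-rm exchanges above are a stale-kubeconfig sweep: each kubeadm-managed conf file is kept only if it already points at the expected control-plane endpoint, and deleted otherwise so the `kubeadm init phase kubeconfig` step below can regenerate it. Sketched in Go (assumed logic; the real flow runs grep and rm over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:65214"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(conf) // missing or pointing elsewhere: remove so kubeadm regenerates it
			fmt.Println("removed", conf)
		}
	}
}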
	I1209 16:49:35.715980    5393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 16:49:35.718859    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 16:49:35.741598    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 16:49:34.078467    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:36.278882    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 16:49:36.411200    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 16:49:36.440930    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1209 16:49:36.460961    5393 api_server.go:52] waiting for apiserver process to appear ...
	I1209 16:49:36.461069    5393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 16:49:36.963334    5393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 16:49:37.463136    5393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 16:49:37.467465    5393 api_server.go:72] duration metric: took 1.006509916s to wait for apiserver process to appear ...
	I1209 16:49:37.467475    5393 api_server.go:88] waiting for apiserver healthz status ...
	I1209 16:49:37.467494    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
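From here both processes (5132 and 5393) sit in the same wait loop: probe https://10.0.2.15:8443/healthz with a short per-request timeout, treat "context deadline exceeded" as not-ready, and retry until an overall deadline expires; the repeated "stopped: ... Client.Timeout exceeded" lines that follow show every probe timing out. A minimal sketch of such a poll loop (assumed timeouts, not minikube's code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Per-probe deadline; exceeding it produces the Client.Timeout error above.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// healthz probe against a self-signed apiserver cert; not for real traffic.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(time.Second) // not ready yet; retry
	}
	fmt.Println("timed out waiting for apiserver healthz")
}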
	I1209 16:49:39.080768    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:39.081279    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:49:39.121181    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:49:39.121351    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:49:39.143686    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:49:39.143814    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:49:39.166728    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:49:39.166817    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:49:39.177779    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:49:39.177860    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:49:39.187979    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:49:39.188063    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:49:39.198839    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:49:39.198916    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:49:39.209425    5132 logs.go:282] 0 containers: []
	W1209 16:49:39.209436    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:49:39.209499    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:49:39.219438    5132 logs.go:282] 0 containers: []
	W1209 16:49:39.219451    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:49:39.219460    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:49:39.219466    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:49:39.224087    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:49:39.224095    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:49:39.236180    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:49:39.236193    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:49:39.248606    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:49:39.248620    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:49:39.282485    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:49:39.282501    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:49:39.296808    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:49:39.296820    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:49:39.309824    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:49:39.309835    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:49:39.324465    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:49:39.324477    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:49:39.349668    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:49:39.349683    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:49:39.362010    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:49:39.362024    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:49:39.376259    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:49:39.376275    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:49:39.388798    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:49:39.388813    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:49:39.408305    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:49:39.408321    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:49:39.445468    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:49:39.445483    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:49:39.464293    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:49:39.464308    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:49:41.977731    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:42.469706    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:42.469809    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:46.980058    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:46.980351    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:49:47.005134    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:49:47.005234    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:49:47.020844    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:49:47.020934    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:49:47.033691    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:49:47.033775    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:49:47.044876    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:49:47.044953    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:49:47.055684    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:49:47.055761    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:49:47.066098    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:49:47.066172    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:49:47.076480    5132 logs.go:282] 0 containers: []
	W1209 16:49:47.076491    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:49:47.076551    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:49:47.089506    5132 logs.go:282] 0 containers: []
	W1209 16:49:47.089517    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:49:47.089526    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:49:47.089532    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:49:47.126430    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:49:47.126438    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:49:47.162103    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:49:47.162114    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:49:47.175046    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:49:47.175057    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:49:47.187155    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:49:47.187169    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:49:47.201705    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:49:47.201719    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:49:47.218849    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:49:47.218859    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:49:47.237648    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:49:47.237659    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:49:47.251315    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:49:47.251330    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:49:47.262985    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:49:47.262995    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:49:47.281377    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:49:47.281391    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:49:47.285928    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:49:47.285936    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:49:47.303553    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:49:47.303563    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:49:47.321049    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:49:47.321059    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:49:47.470580    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:47.470606    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:47.343397    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:49:47.343404    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:49:49.856283    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:52.471079    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:52.471139    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:54.858546    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:54.858709    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:49:54.872891    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:49:54.872976    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:49:54.884049    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:49:54.884134    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:49:54.894876    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:49:54.894953    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:49:54.905717    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:49:54.905801    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:49:54.916624    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:49:54.916705    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:49:54.927185    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:49:54.927264    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:49:54.938164    5132 logs.go:282] 0 containers: []
	W1209 16:49:54.938176    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:49:54.938246    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:49:54.949605    5132 logs.go:282] 0 containers: []
	W1209 16:49:54.949616    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:49:54.949623    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:49:54.949630    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:49:54.955239    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:49:54.955252    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:49:54.994969    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:49:54.994988    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:49:55.010592    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:49:55.010605    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:49:55.029998    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:49:55.030014    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:49:55.042770    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:49:55.042783    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:49:55.057177    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:49:55.057190    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:49:55.073151    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:49:55.073168    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:49:55.088389    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:49:55.088411    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:49:55.111804    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:49:55.111820    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:49:55.125358    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:49:55.125374    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:49:55.138821    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:49:55.138833    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:49:55.152033    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:49:55.152048    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:49:55.193234    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:49:55.193254    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:49:55.207790    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:49:55.207803    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:49:57.471796    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:57.471878    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:57.734825    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:02.472913    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:02.472956    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:02.737306    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:02.737544    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:50:02.759712    5132 logs.go:282] 2 containers: [cf6a8f7e7994 fbb0d697e7d5]
	I1209 16:50:02.759843    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:50:02.774678    5132 logs.go:282] 2 containers: [20581501b80a 83ee3da72236]
	I1209 16:50:02.774759    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:50:02.786685    5132 logs.go:282] 1 containers: [b1f24e9a0e3f]
	I1209 16:50:02.786767    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:50:02.796836    5132 logs.go:282] 2 containers: [a975cbef233a 74824a4c15e7]
	I1209 16:50:02.796911    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:50:02.806594    5132 logs.go:282] 1 containers: [5b955ab5ef24]
	I1209 16:50:02.806660    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:50:02.817697    5132 logs.go:282] 2 containers: [b10332a50588 8cd7609e0999]
	I1209 16:50:02.817779    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:50:02.827710    5132 logs.go:282] 0 containers: []
	W1209 16:50:02.827724    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:50:02.827790    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:50:02.837737    5132 logs.go:282] 0 containers: []
	W1209 16:50:02.837750    5132 logs.go:284] No container was found matching "storage-provisioner"
	I1209 16:50:02.837759    5132 logs.go:123] Gathering logs for kube-controller-manager [8cd7609e0999] ...
	I1209 16:50:02.837765    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cd7609e0999"
	I1209 16:50:02.849003    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:50:02.849014    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:50:02.872540    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:50:02.872547    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:50:02.909152    5132 logs.go:123] Gathering logs for kube-controller-manager [b10332a50588] ...
	I1209 16:50:02.909160    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b10332a50588"
	I1209 16:50:02.926179    5132 logs.go:123] Gathering logs for kube-scheduler [74824a4c15e7] ...
	I1209 16:50:02.926194    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 74824a4c15e7"
	I1209 16:50:02.941790    5132 logs.go:123] Gathering logs for kube-apiserver [cf6a8f7e7994] ...
	I1209 16:50:02.941800    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf6a8f7e7994"
	I1209 16:50:02.956042    5132 logs.go:123] Gathering logs for kube-apiserver [fbb0d697e7d5] ...
	I1209 16:50:02.956052    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fbb0d697e7d5"
	I1209 16:50:02.969126    5132 logs.go:123] Gathering logs for coredns [b1f24e9a0e3f] ...
	I1209 16:50:02.969139    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1f24e9a0e3f"
	I1209 16:50:02.980371    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:50:02.980382    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:50:02.985110    5132 logs.go:123] Gathering logs for etcd [83ee3da72236] ...
	I1209 16:50:02.985118    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 83ee3da72236"
	I1209 16:50:02.998980    5132 logs.go:123] Gathering logs for kube-scheduler [a975cbef233a] ...
	I1209 16:50:02.998990    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a975cbef233a"
	I1209 16:50:03.010845    5132 logs.go:123] Gathering logs for kube-proxy [5b955ab5ef24] ...
	I1209 16:50:03.010856    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b955ab5ef24"
	I1209 16:50:03.022691    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:50:03.022701    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:50:03.035154    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:50:03.035165    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:50:03.070276    5132 logs.go:123] Gathering logs for etcd [20581501b80a] ...
	I1209 16:50:03.070288    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20581501b80a"
	I1209 16:50:05.586446    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:07.474468    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:07.474510    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:10.588711    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:10.588797    5132 kubeadm.go:597] duration metric: took 4m3.574452209s to restartPrimaryControlPlane
	W1209 16:50:10.588857    5132 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 16:50:10.588888    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1209 16:50:11.536596    5132 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 16:50:11.541693    5132 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 16:50:11.544665    5132 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 16:50:11.547613    5132 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 16:50:11.547620    5132 kubeadm.go:157] found existing configuration files:
	
	I1209 16:50:11.547652    5132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/admin.conf
	I1209 16:50:11.550401    5132 kubeadm.go:163] "https://control-plane.minikube.internal:64988" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 16:50:11.550431    5132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 16:50:11.553101    5132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/kubelet.conf
	I1209 16:50:11.556090    5132 kubeadm.go:163] "https://control-plane.minikube.internal:64988" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 16:50:11.556120    5132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 16:50:11.559164    5132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/controller-manager.conf
	I1209 16:50:11.561639    5132 kubeadm.go:163] "https://control-plane.minikube.internal:64988" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 16:50:11.561672    5132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 16:50:11.564374    5132 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/scheduler.conf
	I1209 16:50:11.567341    5132 kubeadm.go:163] "https://control-plane.minikube.internal:64988" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:64988 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 16:50:11.567370    5132 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1209 16:50:11.570271    5132 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 16:50:11.586299    5132 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1209 16:50:11.586328    5132 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 16:50:11.645586    5132 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 16:50:11.645648    5132 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 16:50:11.645708    5132 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 16:50:11.701707    5132 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 16:50:11.708834    5132 out.go:235]   - Generating certificates and keys ...
	I1209 16:50:11.708865    5132 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 16:50:11.708897    5132 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 16:50:11.708949    5132 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 16:50:11.708986    5132 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 16:50:11.709020    5132 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 16:50:11.709055    5132 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 16:50:11.709097    5132 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 16:50:11.709125    5132 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 16:50:11.709165    5132 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 16:50:11.709213    5132 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 16:50:11.709235    5132 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 16:50:11.709264    5132 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 16:50:11.781954    5132 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 16:50:11.886600    5132 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 16:50:12.023670    5132 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 16:50:12.143293    5132 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 16:50:12.176881    5132 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 16:50:12.177299    5132 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 16:50:12.177338    5132 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 16:50:12.265565    5132 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 16:50:12.269233    5132 out.go:235]   - Booting up control plane ...
	I1209 16:50:12.269278    5132 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 16:50:12.269311    5132 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 16:50:12.269342    5132 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 16:50:12.269377    5132 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 16:50:12.269450    5132 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 16:50:12.476018    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:12.476039    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:17.269775    5132 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.004245 seconds
	I1209 16:50:17.269890    5132 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 16:50:17.277123    5132 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 16:50:17.803713    5132 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 16:50:17.804047    5132 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-688000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 16:50:18.310462    5132 kubeadm.go:310] [bootstrap-token] Using token: x6vu92.efyexjcrt0uobog7
	I1209 16:50:18.313291    5132 out.go:235]   - Configuring RBAC rules ...
	I1209 16:50:18.313406    5132 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 16:50:18.314189    5132 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 16:50:18.320385    5132 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 16:50:18.321955    5132 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 16:50:18.323655    5132 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 16:50:18.325402    5132 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 16:50:18.331084    5132 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 16:50:18.493298    5132 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 16:50:18.716470    5132 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 16:50:18.716819    5132 kubeadm.go:310] 
	I1209 16:50:18.716855    5132 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 16:50:18.716861    5132 kubeadm.go:310] 
	I1209 16:50:18.716905    5132 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 16:50:18.716944    5132 kubeadm.go:310] 
	I1209 16:50:18.716965    5132 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 16:50:18.716997    5132 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 16:50:18.717022    5132 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 16:50:18.717025    5132 kubeadm.go:310] 
	I1209 16:50:18.717063    5132 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 16:50:18.717065    5132 kubeadm.go:310] 
	I1209 16:50:18.717092    5132 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 16:50:18.717096    5132 kubeadm.go:310] 
	I1209 16:50:18.717121    5132 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 16:50:18.717167    5132 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 16:50:18.717211    5132 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 16:50:18.717213    5132 kubeadm.go:310] 
	I1209 16:50:18.717254    5132 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 16:50:18.717305    5132 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 16:50:18.717317    5132 kubeadm.go:310] 
	I1209 16:50:18.717365    5132 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x6vu92.efyexjcrt0uobog7 \
	I1209 16:50:18.717417    5132 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eb7b4eec38a0897ce971e2bba2a6b79ec587773d147d857ca417d407ce72cb1f \
	I1209 16:50:18.717430    5132 kubeadm.go:310] 	--control-plane 
	I1209 16:50:18.717433    5132 kubeadm.go:310] 
	I1209 16:50:18.717473    5132 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 16:50:18.717475    5132 kubeadm.go:310] 
	I1209 16:50:18.717512    5132 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x6vu92.efyexjcrt0uobog7 \
	I1209 16:50:18.717585    5132 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eb7b4eec38a0897ce971e2bba2a6b79ec587773d147d857ca417d407ce72cb1f 
	I1209 16:50:18.717635    5132 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 16:50:18.717642    5132 cni.go:84] Creating CNI manager for ""
	I1209 16:50:18.717651    5132 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:50:18.720434    5132 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 16:50:18.726385    5132 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 16:50:18.729292    5132 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1209 16:50:18.734143    5132 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 16:50:18.734196    5132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 16:50:18.734199    5132 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-688000 minikube.k8s.io/updated_at=2024_12_09T16_50_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=running-upgrade-688000 minikube.k8s.io/primary=true
	I1209 16:50:18.780564    5132 ops.go:34] apiserver oom_adj: -16
	I1209 16:50:18.780562    5132 kubeadm.go:1113] duration metric: took 46.410958ms to wait for elevateKubeSystemPrivileges
	I1209 16:50:18.780685    5132 kubeadm.go:394] duration metric: took 4m11.779973583s to StartCluster
	I1209 16:50:18.780698    5132 settings.go:142] acquiring lock: {Name:mk6085b49e250ce3863979186260a283889e4dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:50:18.780802    5132 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:50:18.781212    5132 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/kubeconfig: {Name:mk5092322010dd3bee2f23e3f2812067ca57270f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:50:18.781437    5132 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:50:18.781450    5132 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 16:50:18.781485    5132 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-688000"
	I1209 16:50:18.781493    5132 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-688000"
	W1209 16:50:18.781497    5132 addons.go:243] addon storage-provisioner should already be in state true
	I1209 16:50:18.781512    5132 host.go:66] Checking if "running-upgrade-688000" exists ...
	I1209 16:50:18.781532    5132 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-688000"
	I1209 16:50:18.781554    5132 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-688000"
	I1209 16:50:18.781611    5132 config.go:182] Loaded profile config "running-upgrade-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:50:18.782633    5132 kapi.go:59] client config for running-upgrade-688000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/running-upgrade-688000/client.key", CAFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10484b740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 16:50:18.782971    5132 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-688000"
	W1209 16:50:18.782976    5132 addons.go:243] addon default-storageclass should already be in state true
	I1209 16:50:18.782984    5132 host.go:66] Checking if "running-upgrade-688000" exists ...
	I1209 16:50:18.785391    5132 out.go:177] * Verifying Kubernetes components...
	I1209 16:50:18.785792    5132 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 16:50:18.791679    5132 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 16:50:18.791687    5132 sshutil.go:53] new ssh client: &{IP:localhost Port:64956 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/running-upgrade-688000/id_rsa Username:docker}
	I1209 16:50:18.795386    5132 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:50:17.477966    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:17.478010    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:18.799378    5132 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:50:18.803373    5132 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 16:50:18.803380    5132 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 16:50:18.803387    5132 sshutil.go:53] new ssh client: &{IP:localhost Port:64956 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/running-upgrade-688000/id_rsa Username:docker}
	I1209 16:50:18.894912    5132 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 16:50:18.899774    5132 api_server.go:52] waiting for apiserver process to appear ...
	I1209 16:50:18.899832    5132 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 16:50:18.904052    5132 api_server.go:72] duration metric: took 122.60525ms to wait for apiserver process to appear ...
	I1209 16:50:18.904060    5132 api_server.go:88] waiting for apiserver healthz status ...
	I1209 16:50:18.904066    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:18.926372    5132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 16:50:18.945000    5132 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 16:50:19.270446    5132 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 16:50:19.270456    5132 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 16:50:22.480408    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:22.480492    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:23.906145    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:23.906209    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:27.483105    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:27.483125    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:28.906955    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:28.907000    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:32.485289    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:32.485312    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:33.907569    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:33.907592    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:37.487492    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:37.487762    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:50:37.510789    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:50:37.510877    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:50:37.523307    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:50:37.523383    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:50:37.534495    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:50:37.534567    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:50:37.546623    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:50:37.546696    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:50:37.561690    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:50:37.561775    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:50:37.572762    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:50:37.572836    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:50:37.583615    5393 logs.go:282] 0 containers: []
	W1209 16:50:37.583625    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:50:37.583700    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:50:37.594378    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:50:37.594397    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:50:37.594403    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:50:37.606945    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:50:37.606956    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:50:37.625779    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:50:37.625790    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:50:37.630108    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:50:37.630116    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:50:37.649135    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:50:37.649150    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:50:37.690958    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:50:37.690969    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:50:37.702648    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:50:37.702662    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:50:37.714601    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:50:37.714612    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:50:37.729799    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:50:37.729810    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:50:37.745883    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:50:37.745897    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:50:37.758274    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:50:37.758287    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:50:37.798257    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:50:37.798269    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:50:37.812660    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:50:37.812673    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:50:37.827132    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:50:37.827144    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:50:37.945101    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:50:37.945112    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:50:37.960403    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:50:37.960414    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:50:37.972180    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:50:37.972193    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:50:40.500776    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:38.908196    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:38.908226    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:45.503225    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:45.503500    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:50:45.535458    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:50:45.535590    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:50:45.550933    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:50:45.551026    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:50:45.564058    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:50:45.564137    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:50:45.575598    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:50:45.575696    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:50:45.585627    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:50:45.585694    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:50:45.596109    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:50:45.596182    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:50:45.606563    5393 logs.go:282] 0 containers: []
	W1209 16:50:45.606575    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:50:45.606646    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:50:45.617191    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:50:45.617209    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:50:45.617216    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:50:45.621623    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:50:45.621630    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:50:45.636916    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:50:45.636927    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:50:45.660975    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:50:45.660985    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:50:45.675192    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:50:45.675203    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:50:45.687524    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:50:45.687537    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:50:45.701941    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:50:45.701952    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:50:45.713023    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:50:45.713034    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:50:45.749486    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:50:45.749496    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:50:45.788532    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:50:45.788547    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:50:45.803076    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:50:45.803088    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:50:45.814638    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:50:45.814649    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:50:45.825758    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:50:45.825772    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:50:43.909055    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:43.909118    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:48.910307    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:48.910367    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1209 16:50:49.270922    5132 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1209 16:50:49.275224    5132 out.go:177] * Enabled addons: storage-provisioner
	I1209 16:50:45.864987    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:50:45.865001    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:50:45.875927    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:50:45.875938    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:50:45.890802    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:50:45.890813    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:50:45.908568    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:50:45.908581    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:50:48.422869    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:49.286995    5132 addons.go:510] duration metric: took 30.505661166s for enable addons: enabled=[storage-provisioner]
	I1209 16:50:53.425277    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:53.425486    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:50:53.442803    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:50:53.442894    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:50:53.455188    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:50:53.455267    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:50:53.466082    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:50:53.466155    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:50:53.476787    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:50:53.476867    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:50:53.486933    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:50:53.487016    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:50:53.497743    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:50:53.497813    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:50:53.508447    5393 logs.go:282] 0 containers: []
	W1209 16:50:53.508463    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:50:53.508530    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:50:53.518980    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:50:53.518998    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:50:53.519004    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:50:53.533676    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:50:53.533691    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:50:53.548782    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:50:53.548794    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:50:53.563721    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:50:53.563733    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:50:53.568137    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:50:53.568146    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:50:53.605877    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:50:53.605889    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:50:53.619162    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:50:53.619173    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:50:53.631924    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:50:53.631937    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:50:53.645782    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:50:53.645794    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:50:53.660116    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:50:53.660127    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:50:53.685621    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:50:53.685631    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:50:53.723953    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:50:53.723964    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:50:53.737670    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:50:53.737680    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:50:53.752397    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:50:53.752407    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:50:53.763345    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:50:53.763370    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:50:53.775276    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:50:53.775287    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:50:53.792840    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:50:53.792851    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:50:53.911707    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:53.911741    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:56.331579    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:58.913804    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:58.913827    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:01.333924    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:01.334079    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:01.348379    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:51:01.348460    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:01.358983    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:51:01.359049    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:01.373597    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:51:01.373678    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:01.385941    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:51:01.386017    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:01.397631    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:51:01.397722    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:01.408671    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:51:01.408749    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:01.418865    5393 logs.go:282] 0 containers: []
	W1209 16:51:01.418877    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:01.418932    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:01.429328    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:51:01.429349    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:51:01.429356    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:51:01.443505    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:51:01.443516    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:51:01.460438    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:51:01.460448    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:51:01.472085    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:51:01.472098    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:51:01.483820    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:51:01.483832    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:01.496208    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:01.496220    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:01.500483    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:01.500491    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:01.535694    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:51:01.535704    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:51:01.549732    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:51:01.549742    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:51:01.561576    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:51:01.561586    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:51:01.574029    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:51:01.574040    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:51:01.588320    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:01.588334    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:51:01.625807    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:51:01.625823    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:51:01.640681    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:51:01.640697    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:51:01.652516    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:51:01.652528    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:51:01.667697    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:51:01.667707    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:51:01.705829    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:01.705841    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:04.233639    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:03.916020    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:03.916062    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:09.236010    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:09.236183    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:09.251977    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:51:09.252074    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:09.265717    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:51:09.265799    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:09.276374    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:51:09.276439    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:09.286723    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:51:09.286815    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:09.296687    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:51:09.296765    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:09.307172    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:51:09.307250    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:09.316964    5393 logs.go:282] 0 containers: []
	W1209 16:51:09.316979    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:09.317045    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:09.327252    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:51:09.327269    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:09.327275    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:51:09.364269    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:51:09.364278    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:51:09.375725    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:09.375736    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:09.381414    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:51:09.381423    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:51:09.393140    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:51:09.393150    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:09.405639    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:51:09.405650    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:51:09.419646    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:51:09.419656    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:51:09.433605    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:51:09.433614    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:51:09.445549    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:51:09.445563    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:51:09.458136    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:51:09.458150    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:51:09.473195    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:09.473204    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:09.508905    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:51:09.508916    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:51:09.547628    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:51:09.547644    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:51:09.562834    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:51:09.562844    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:51:09.582398    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:51:09.582409    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:51:09.600048    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:51:09.600059    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:51:09.611545    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:09.611560    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:08.918342    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:08.918365    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:12.137767    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:13.920642    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:13.920693    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:17.140069    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:17.140276    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:17.156332    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:51:17.156428    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:17.168705    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:51:17.168790    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:17.179547    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:51:17.179620    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:17.190058    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:51:17.190139    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:17.200671    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:51:17.200748    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:17.211384    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:51:17.211456    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:17.221244    5393 logs.go:282] 0 containers: []
	W1209 16:51:17.221260    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:17.221326    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:17.231776    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:51:17.231792    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:51:17.231798    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:51:17.246974    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:51:17.246984    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:51:17.261604    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:17.261616    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:17.286239    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:51:17.286246    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:51:17.323777    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:51:17.323790    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:51:17.335467    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:51:17.335477    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:51:17.346792    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:51:17.346804    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:17.358664    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:17.358675    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:17.362713    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:51:17.362720    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:51:17.378105    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:51:17.378146    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:51:17.392090    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:51:17.392101    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:51:17.410077    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:51:17.410088    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:51:17.421911    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:17.421921    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:51:17.458944    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:17.458956    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:17.494784    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:51:17.494799    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:51:17.515335    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:51:17.515346    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:51:17.527078    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:51:17.527089    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:51:20.039995    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:18.923040    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:18.923255    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:18.934478    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:51:18.934556    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:18.945380    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:51:18.945459    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:18.956158    5132 logs.go:282] 2 containers: [02d8d43dfbea 64247a147667]
	I1209 16:51:18.956241    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:18.967212    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:51:18.967277    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:18.977860    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:51:18.977931    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:18.988738    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:51:18.988815    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:19.003066    5132 logs.go:282] 0 containers: []
	W1209 16:51:19.003078    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:19.003171    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:19.013761    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:51:19.013779    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:51:19.013784    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:51:19.025949    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:51:19.025961    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:51:19.047878    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:51:19.047890    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:51:19.062221    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:51:19.062232    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:51:19.077821    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:19.077831    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:19.116374    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:51:19.116386    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:51:19.131173    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:51:19.131187    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:51:19.142595    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:51:19.142610    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:51:19.161167    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:51:19.161178    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:51:19.177589    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:19.177598    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:19.201320    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:19.201328    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:51:19.235232    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:51:19.235330    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:51:19.236458    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:19.236464    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:19.241848    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:51:19.241856    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:19.253335    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:51:19.253351    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:51:19.253377    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:51:19.253381    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:51:19.253384    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:51:19.253392    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:51:19.253395    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:51:25.042321    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:25.042543    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:25.060349    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:51:25.060453    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:25.073380    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:51:25.073491    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:25.084793    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:51:25.084873    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:25.094831    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:51:25.094914    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:25.105323    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:51:25.105405    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:25.115964    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:51:25.116035    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:25.126016    5393 logs.go:282] 0 containers: []
	W1209 16:51:25.126026    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:25.126083    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:25.136313    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:51:25.136339    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:51:25.136344    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:51:25.148332    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:51:25.148345    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:51:25.163341    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:51:25.163356    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:25.179581    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:25.179596    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:25.183622    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:25.183631    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:25.220292    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:51:25.220303    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:51:25.234492    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:51:25.234507    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:51:25.249082    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:25.249092    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:25.272830    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:51:25.272838    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:51:25.315156    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:51:25.315167    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:51:25.326799    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:51:25.326814    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:51:25.344183    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:51:25.344194    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:51:25.358523    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:51:25.358535    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:51:25.369260    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:51:25.369272    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:51:25.381564    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:25.381574    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:51:25.419615    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:51:25.419636    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:51:25.433902    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:51:25.433913    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:51:27.947602    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:29.257539    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:32.950331    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:32.950436    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:32.961727    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:51:32.961805    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:32.972406    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:51:32.972487    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:32.982807    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:51:32.982891    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:32.993457    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:51:32.993532    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:33.004576    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:51:33.004654    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:33.015846    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:51:33.015923    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:33.026658    5393 logs.go:282] 0 containers: []
	W1209 16:51:33.026667    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:33.026729    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:33.036943    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:51:33.036970    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:33.036976    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:51:33.074002    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:33.074012    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:33.077984    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:51:33.077991    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:51:33.116813    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:51:33.116830    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:51:33.131958    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:51:33.131968    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:51:33.143929    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:51:33.143939    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:33.155504    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:33.155516    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:33.190651    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:51:33.190663    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:51:33.204973    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:51:33.204984    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:51:33.219309    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:51:33.219320    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:51:33.231368    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:51:33.231379    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:51:33.248239    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:51:33.248251    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:51:33.271008    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:51:33.271021    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:51:33.282684    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:33.282695    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:33.305902    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:51:33.305911    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:51:33.317085    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:51:33.317097    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:51:33.328799    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:51:33.328810    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:51:35.844955    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:34.259976    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:34.260218    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:34.296265    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:51:34.296391    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:34.313509    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:51:34.313598    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:34.326717    5132 logs.go:282] 2 containers: [02d8d43dfbea 64247a147667]
	I1209 16:51:34.326802    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:34.338347    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:51:34.338418    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:34.349655    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:51:34.349725    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:34.361055    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:51:34.361129    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:34.374071    5132 logs.go:282] 0 containers: []
	W1209 16:51:34.374081    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:34.374146    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:34.385898    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:51:34.385916    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:51:34.385922    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:51:34.400883    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:34.400896    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:34.426096    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:51:34.426107    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:34.437915    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:34.437925    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:34.444374    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:34.444385    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:34.490406    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:51:34.490421    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:51:34.505106    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:51:34.505117    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:51:34.521777    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:51:34.521788    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:51:34.533890    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:51:34.533906    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:51:34.548722    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:51:34.548732    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:51:34.561050    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:51:34.561059    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:51:34.586512    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:34.586526    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:51:34.621478    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:51:34.621575    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:51:34.622728    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:51:34.622742    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:51:34.636791    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:51:34.636802    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:51:34.636829    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:51:34.636836    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:51:34.636840    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:51:34.636847    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:51:34.636849    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:51:40.847368    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:40.847617    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:40.868911    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:51:40.869012    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:40.882309    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:51:40.882397    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:40.898844    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:51:40.898917    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:40.909843    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:51:40.909922    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:40.920212    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:51:40.920285    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:40.930361    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:51:40.930433    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:40.941195    5393 logs.go:282] 0 containers: []
	W1209 16:51:40.941206    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:40.941266    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:40.951840    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:51:40.951859    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:51:40.951865    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:51:40.966167    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:51:40.966180    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:51:40.977639    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:51:40.977651    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:51:40.990543    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:40.990554    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:40.994514    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:51:40.994523    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:51:41.009479    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:51:41.009490    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:51:41.027000    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:41.027013    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:41.063364    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:51:41.063375    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:51:41.077100    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:51:41.077110    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:51:41.122999    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:51:41.123011    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:51:41.134785    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:51:41.134797    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:51:41.151744    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:41.151754    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:41.176616    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:41.176624    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:51:41.214977    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:51:41.214985    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:51:41.228951    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:51:41.228962    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:51:41.240598    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:51:41.240610    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:51:41.255685    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:51:41.255696    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:43.769864    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:44.640951    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:48.772255    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:48.772432    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:48.788207    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:51:48.788284    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:48.799389    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:51:48.799476    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:48.810237    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:51:48.810315    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:48.821159    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:51:48.821240    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:48.835855    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:51:48.835934    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:48.846130    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:51:48.846201    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:48.856539    5393 logs.go:282] 0 containers: []
	W1209 16:51:48.856551    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:48.856616    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:48.867270    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:51:48.867286    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:48.867292    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:48.872498    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:48.872508    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:48.909960    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:51:48.909971    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:51:48.921818    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:51:48.921830    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:51:48.936284    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:48.936298    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:51:48.973709    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:51:48.973717    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:51:48.987644    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:51:48.987653    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:51:49.007720    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:51:49.007734    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:51:49.024939    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:51:49.024951    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:51:49.036577    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:51:49.036587    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:51:49.047641    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:49.047652    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:49.070549    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:51:49.070556    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:51:49.087815    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:51:49.087825    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:51:49.125055    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:51:49.125067    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:51:49.136089    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:51:49.136101    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:51:49.147605    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:51:49.147616    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:51:49.166263    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:51:49.166273    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:49.643318    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:49.643486    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:49.658180    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:51:49.658263    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:49.670073    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:51:49.670151    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:49.681370    5132 logs.go:282] 2 containers: [02d8d43dfbea 64247a147667]
	I1209 16:51:49.681444    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:49.692991    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:51:49.693063    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:49.704742    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:51:49.704824    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:49.716452    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:51:49.716525    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:49.727617    5132 logs.go:282] 0 containers: []
	W1209 16:51:49.727630    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:49.727694    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:49.738568    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:51:49.738583    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:49.738588    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:49.742953    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:49.742963    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:49.778803    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:51:49.778813    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:51:49.791043    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:51:49.791056    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:51:49.806897    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:49.806910    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:49.831411    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:49.831421    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:51:49.865392    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:51:49.865487    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:51:49.866582    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:51:49.866590    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:51:49.881270    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:51:49.881284    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:51:49.896403    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:51:49.896416    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:51:49.909491    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:51:49.909503    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:51:49.925726    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:51:49.925740    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:51:49.943895    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:51:49.943904    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:51:49.956057    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:51:49.956069    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:49.968433    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:51:49.968443    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:51:49.968469    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:51:49.968473    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:51:49.968476    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:51:49.968479    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:51:49.968483    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
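Note on the probe pattern above: each api_server.go:253 / api_server.go:269 pair is a single GET against the guest apiserver's /healthz endpoint with a roughly five-second client timeout; when the apiserver never answers, net/http surfaces exactly the "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" error seen throughout this log, and minikube falls back to gathering logs. Below is a minimal Go sketch of that probe, illustrative only and not minikube's actual source: the URL and timeout are taken from the log, probeHealthz is a hypothetical name, and the self-signed-certificate handling is an assumption.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeHealthz (hypothetical name) issues one GET against /healthz and
    // gives up after the timeout, matching the ~5s gap between the
    // "Checking apiserver healthz" and "stopped:" lines above.
    func probeHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: timeout,
    		// Assumption: the guest apiserver presents a self-signed
    		// certificate, so the probe skips verification.
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		// On a hung apiserver this is the error seen in the log:
    		// "context deadline exceeded (Client.Timeout exceeded
    		// while awaiting headers)"
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	if err := probeHealthz("https://10.0.2.15:8443/healthz", 5*time.Second); err != nil {
    		fmt.Println("stopped:", err) // minikube then gathers logs, as above
    		return
    	}
    	fmt.Println("apiserver healthy")
    }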
	I1209 16:51:51.683368    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:56.685623    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:56.685732    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:56.697093    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:51:56.697177    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:56.713829    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:51:56.713897    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:56.724781    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:51:56.724864    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:56.736548    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:51:56.736622    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:56.747361    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:51:56.747430    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:56.759424    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:51:56.759489    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:56.769437    5393 logs.go:282] 0 containers: []
	W1209 16:51:56.769450    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:56.769516    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:56.780402    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:51:56.780420    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:51:56.780426    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:51:56.791944    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:56.791954    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:51:56.830221    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:56.830229    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:56.866336    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:51:56.866348    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:51:56.877868    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:51:56.877880    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:51:56.895975    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:51:56.895987    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:51:56.911872    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:51:56.911883    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:51:56.926169    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:51:56.926181    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:56.938116    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:51:56.938127    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:51:56.950641    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:56.950654    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:56.973633    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:56.973643    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:56.978260    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:51:56.978267    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:51:57.002589    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:51:57.002601    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:51:57.039546    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:51:57.039562    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:51:57.055009    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:51:57.055035    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:51:57.069259    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:51:57.069276    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:51:57.082257    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:51:57.082271    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
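For context on the container-discovery lines (logs.go:282): before each log sweep, minikube runs one docker ps per control-plane component over SSH, filtering on the kubelet's k8s_<component> name prefix and printing only the container IDs. Two IDs for a component (as for kube-apiserver here) mean a current container plus an exited earlier instance, and zero IDs produce the "No container was found matching" warning seen for kindnet. The following is a sketch of that step run locally with os/exec instead of minikube's ssh_runner; listContainerIDs is a hypothetical helper, and a docker CLI on PATH is assumed.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs (hypothetical name) mirrors one
    // "docker ps -a --filter=name=k8s_<component> --format={{.ID}}"
    // call from the log above.
    func listContainerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	// One ID per line; Fields also trims the trailing newline.
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"} {
    		ids, err := listContainerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		// Zero hits correspond to the "No container was found
    		// matching" warning in the log.
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }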
	I1209 16:51:59.595774    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:59.972620    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:04.598439    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:04.598619    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:04.614749    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:52:04.614837    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:04.626099    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:52:04.626170    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:04.636553    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:52:04.636626    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:04.655106    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:52:04.655181    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:04.665465    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:52:04.665543    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:04.676276    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:52:04.676345    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:04.686545    5393 logs.go:282] 0 containers: []
	W1209 16:52:04.686556    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:04.686623    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:04.696715    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:52:04.696733    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:52:04.696738    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:52:04.715769    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:52:04.715782    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:52:04.726978    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:04.726990    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:04.731335    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:52:04.731340    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:52:04.743197    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:52:04.743207    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:52:04.757492    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:52:04.757502    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:52:04.769395    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:04.769409    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:04.814816    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:52:04.814826    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:52:04.853917    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:52:04.853932    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:52:04.867892    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:52:04.867905    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:52:04.892675    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:52:04.892689    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:04.905044    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:04.905059    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:52:04.943874    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:52:04.943884    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:52:04.957720    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:52:04.957731    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:52:04.969279    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:04.969291    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:04.993694    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:52:04.993713    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:52:05.006407    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:52:05.006422    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:52:04.974924    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:04.975027    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:04.986424    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:52:04.986509    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:04.997550    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:52:04.997640    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:05.009220    5132 logs.go:282] 2 containers: [02d8d43dfbea 64247a147667]
	I1209 16:52:05.009300    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:05.020761    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:52:05.020845    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:05.031983    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:52:05.032064    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:05.043382    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:52:05.043462    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:05.053839    5132 logs.go:282] 0 containers: []
	W1209 16:52:05.053849    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:05.053911    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:05.064645    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:52:05.064659    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:52:05.064665    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:52:05.079249    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:52:05.079260    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:52:05.093941    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:52:05.093955    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:52:05.105702    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:05.105716    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:05.130079    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:05.130089    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:52:05.164005    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:52:05.164098    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:52:05.165226    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:05.165231    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:05.169609    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:05.169619    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:05.204130    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:52:05.204142    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:52:05.221797    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:52:05.221806    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:52:05.233343    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:52:05.233355    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:05.244693    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:52:05.244702    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:52:05.259422    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:52:05.259432    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:52:05.275429    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:52:05.275439    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:52:05.291497    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:52:05.291506    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:52:05.291532    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:52:05.291537    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:52:05.291540    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:52:05.291544    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:52:05.291547    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
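The two logs.go:138 warnings repeated in every sweep are minikube scanning the journalctl output for known-bad kubelet lines and replaying them under "X Problems detected in kubelet:". The underlying fault is an RBAC one: the node authorizer finds no relationship between node running-upgrade-688000 and the coredns ConfigMap, so the kubelet's reflector can neither list nor watch it. A rough sketch of the scan follows, assuming a simple substring match; problemMarkers is a hypothetical stand-in for minikube's real pattern list.

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // problemMarkers is a hypothetical stand-in for the patterns
    // minikube flags in kubelet journal output.
    var problemMarkers = []string{
    	"failed to list",
    	"Failed to watch",
    }

    // findKubeletProblems returns every journal line containing one of
    // the markers, as echoed by logs.go:138 above.
    func findKubeletProblems(journal string) []string {
    	var problems []string
    	sc := bufio.NewScanner(strings.NewReader(journal))
    	for sc.Scan() {
    		line := sc.Text()
    		for _, m := range problemMarkers {
    			if strings.Contains(line, m) {
    				problems = append(problems, line)
    				break
    			}
    		}
    	}
    	return problems
    }

    func main() {
    	journal := "Dec 10 00:50:31 ... failed to list *v1.ConfigMap ...\n" +
    		"Dec 10 00:50:31 ... Failed to watch *v1.ConfigMap ..."
    	for _, p := range findKubeletProblems(journal) {
    		fmt.Println("Found kubelet problem:", p)
    	}
    }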
	I1209 16:52:07.524664    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:12.526660    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:12.526845    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:12.543505    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:52:12.543609    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:12.556147    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:52:12.556233    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:12.567341    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:52:12.567409    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:12.578629    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:52:12.578732    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:12.588982    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:52:12.589054    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:12.603364    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:52:12.603436    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:12.620859    5393 logs.go:282] 0 containers: []
	W1209 16:52:12.620871    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:12.620935    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:12.631494    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:52:12.631516    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:52:12.631522    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:52:12.642494    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:52:12.642506    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:52:12.657593    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:12.657606    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:12.681550    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:52:12.681561    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:52:12.695943    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:12.695959    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:52:12.734285    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:12.734296    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:12.772539    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:52:12.772551    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:52:12.787717    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:52:12.787729    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:52:12.805217    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:52:12.805228    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:52:12.821087    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:52:12.821099    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:52:12.836872    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:52:12.836886    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:52:12.849506    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:12.849519    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:12.854094    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:52:12.854100    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:52:12.868344    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:52:12.868354    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:52:12.905843    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:52:12.905857    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:52:12.922141    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:52:12.922152    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:52:12.933844    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:52:12.933856    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:15.448532    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:15.295754    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:20.450773    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:20.450868    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:20.462580    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:52:20.462660    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:20.474095    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:52:20.474178    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:20.489276    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:52:20.489358    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:20.500708    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:52:20.500786    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:20.512182    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:52:20.512265    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:20.523882    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:52:20.523963    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:20.534548    5393 logs.go:282] 0 containers: []
	W1209 16:52:20.534560    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:20.534628    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:20.546083    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:52:20.546102    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:52:20.546108    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:52:20.585432    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:52:20.585448    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:52:20.601369    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:52:20.601381    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:52:20.614313    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:52:20.614327    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:52:20.628179    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:20.628190    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:20.666945    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:52:20.666956    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:52:20.681853    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:52:20.681865    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:52:20.695737    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:52:20.695749    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:52:20.715395    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:52:20.715407    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:52:20.732781    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:20.732794    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:52:20.769994    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:52:20.770003    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:52:20.784179    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:52:20.784192    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:20.796698    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:20.796707    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:20.801376    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:52:20.801381    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:52:20.815880    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:52:20.815894    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:52:20.828069    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:52:20.828079    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:52:20.842747    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:20.842761    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:20.298413    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:20.298688    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:20.322368    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:52:20.322484    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:20.337950    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:52:20.338032    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:20.350755    5132 logs.go:282] 2 containers: [02d8d43dfbea 64247a147667]
	I1209 16:52:20.350840    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:20.374238    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:52:20.374307    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:20.384795    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:52:20.384882    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:20.394961    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:52:20.395035    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:20.404954    5132 logs.go:282] 0 containers: []
	W1209 16:52:20.404963    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:20.405023    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:20.415873    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:52:20.415896    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:20.415903    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:52:20.448531    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:52:20.448623    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:52:20.449764    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:20.449772    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:20.454236    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:20.454246    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:20.494044    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:52:20.494057    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:52:20.509551    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:52:20.509567    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:52:20.525375    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:52:20.525384    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:52:20.537674    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:20.537685    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:20.563456    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:52:20.563468    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:52:20.577099    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:52:20.577110    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:52:20.594891    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:52:20.594903    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:52:20.607765    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:52:20.607776    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:52:20.628618    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:52:20.628631    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:52:20.641029    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:52:20.641042    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:20.654273    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:52:20.654285    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:52:20.654314    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:52:20.654320    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:52:20.654323    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:52:20.654330    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:52:20.654334    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:52:23.368823    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:28.371211    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:28.371788    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:28.411763    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:52:28.411927    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:28.432216    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:52:28.432331    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:28.450597    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:52:28.450687    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:28.463006    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:52:28.463082    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:28.473356    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:52:28.473426    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:28.483590    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:52:28.483667    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:28.493419    5393 logs.go:282] 0 containers: []
	W1209 16:52:28.493433    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:28.493490    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:28.508511    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:52:28.508528    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:52:28.508534    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:52:28.523032    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:28.523042    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:28.527257    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:52:28.527266    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:52:28.540992    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:52:28.541005    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:52:28.580056    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:52:28.580068    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:52:28.594167    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:52:28.594180    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:52:28.605079    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:52:28.605090    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:52:28.619425    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:52:28.619437    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:52:28.636573    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:28.636583    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:52:28.673601    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:28.673610    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:28.711083    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:52:28.711095    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:52:28.725226    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:28.725237    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:28.749607    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:52:28.749628    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:52:28.761778    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:52:28.761790    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:52:28.777128    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:52:28.777138    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:52:28.788710    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:52:28.788720    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:52:28.800181    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:52:28.800193    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:30.658444    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:31.314137    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:35.660732    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:35.661292    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:35.699229    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:52:35.699385    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:35.719430    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:52:35.719530    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:35.741596    5132 logs.go:282] 4 containers: [7374e35c93c9 f86fa2ef9959 02d8d43dfbea 64247a147667]
	I1209 16:52:35.741674    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:35.752853    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:52:35.752933    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:35.764060    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:52:35.764128    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:35.779138    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:52:35.779225    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:35.790023    5132 logs.go:282] 0 containers: []
	W1209 16:52:35.790033    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:35.790101    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:35.800921    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:52:35.800938    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:52:35.800944    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:52:35.818659    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:52:35.818672    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:52:35.832960    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:52:35.832970    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:52:35.845355    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:52:35.845366    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:35.860410    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:35.860422    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:35.865311    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:35.865319    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:35.900145    5132 logs.go:123] Gathering logs for coredns [f86fa2ef9959] ...
	I1209 16:52:35.900156    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86fa2ef9959"
	I1209 16:52:35.912850    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:35.912865    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:35.938641    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:52:35.938655    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:52:35.950764    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:52:35.950777    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:52:35.962850    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:35.962865    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:52:35.996330    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:52:35.996427    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:52:35.997560    5132 logs.go:123] Gathering logs for coredns [7374e35c93c9] ...
	I1209 16:52:35.997565    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7374e35c93c9"
	I1209 16:52:36.009382    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:52:36.009391    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:52:36.024168    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:52:36.024178    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:52:36.049534    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:52:36.049543    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:52:36.060830    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:52:36.060839    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:52:36.060869    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:52:36.060876    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:52:36.060878    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:52:36.060882    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:52:36.060884    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:52:36.316483    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:36.316693    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:36.338841    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:52:36.338969    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:36.356815    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:52:36.356906    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:36.368874    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:52:36.368969    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:36.380351    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:52:36.380427    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:36.391214    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:52:36.391300    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:36.402146    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:52:36.402221    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:36.413285    5393 logs.go:282] 0 containers: []
	W1209 16:52:36.413295    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:36.413353    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:36.423884    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:52:36.423905    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:36.423912    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:52:36.461579    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:52:36.461591    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:52:36.477091    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:52:36.477101    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:52:36.488579    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:52:36.488592    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:36.501845    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:52:36.501857    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:52:36.539068    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:52:36.539079    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:52:36.550786    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:52:36.550798    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:52:36.562246    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:36.562257    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:36.566495    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:52:36.566501    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:52:36.581005    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:52:36.581016    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:52:36.593102    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:52:36.593114    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:52:36.608979    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:52:36.608990    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:52:36.626404    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:36.626417    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:36.662945    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:52:36.662956    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:52:36.677370    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:52:36.677383    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:52:36.693073    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:52:36.693088    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:52:36.707045    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:36.707059    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:39.231163    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:44.233456    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:44.233622    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:44.246938    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:52:44.247014    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:44.258104    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:52:44.258169    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:44.268622    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:52:44.268699    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:44.279199    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:52:44.279272    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:44.293917    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:52:44.293989    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:44.304545    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:52:44.304618    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:44.315357    5393 logs.go:282] 0 containers: []
	W1209 16:52:44.315370    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:44.315439    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:44.326432    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:52:44.326452    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:52:44.326458    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:52:44.341130    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:52:44.341142    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:52:44.353639    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:44.353650    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:52:44.391171    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:44.391188    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:44.426886    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:52:44.426900    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:52:44.441709    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:52:44.441723    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:52:44.456670    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:52:44.456681    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:52:44.469112    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:52:44.469126    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:52:44.488199    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:44.488212    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:44.492613    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:52:44.492624    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:52:44.507687    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:52:44.507702    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:52:44.522123    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:52:44.522133    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:52:44.560109    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:52:44.560118    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:52:44.575415    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:52:44.575425    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:52:44.594622    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:52:44.594633    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:52:44.605947    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:44.605959    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:44.628483    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:52:44.628491    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
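
The "container status" gather that closes each round is a shell fallback: use crictl when it is on PATH, otherwise fall back to `docker ps -a`. The backticks expand `which crictl || echo crictl` first, so when crictl is absent the sudo'd command degrades to a bare `crictl` that fails, and the `|| sudo docker ps -a` arm takes over. Issued from Go it might look like this (sketch, local exec):

    package containerstatus

    import "os/exec"

    // ContainerStatus lists all containers, preferring crictl when present
    // and falling back to docker, mirroring the compound command above.
    func ContainerStatus() ([]byte, error) {
        cmd := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
        return cmd.CombinedOutput()
    }
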
	I1209 16:52:46.064979    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:47.144018    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:51.065510    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:51.065700    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:51.079665    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:52:51.079756    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:51.090875    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:52:51.090951    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:51.101963    5132 logs.go:282] 4 containers: [7374e35c93c9 f86fa2ef9959 02d8d43dfbea 64247a147667]
	I1209 16:52:51.102040    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:51.112414    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:52:51.112489    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:51.126517    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:52:51.126609    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:51.145518    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:52:51.145596    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:51.155834    5132 logs.go:282] 0 containers: []
	W1209 16:52:51.155850    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:51.155911    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:51.166570    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:52:51.166587    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:51.166593    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:51.171733    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:52:51.171740    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:52:51.182967    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:52:51.182979    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:52:51.195139    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:51.195149    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:51.220301    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:51.220308    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:51.257356    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:52:51.257368    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:52:51.271804    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:52:51.271816    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:52:51.292970    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:52:51.292984    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:52:51.304390    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:52:51.304401    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:51.317140    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:51.317149    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:52:51.351147    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:52:51.351243    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:52:51.352371    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:52:51.352376    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:52:51.366786    5132 logs.go:123] Gathering logs for coredns [7374e35c93c9] ...
	I1209 16:52:51.366797    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7374e35c93c9"
	I1209 16:52:51.382117    5132 logs.go:123] Gathering logs for coredns [f86fa2ef9959] ...
	I1209 16:52:51.382129    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86fa2ef9959"
	I1209 16:52:51.393908    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:52:51.393921    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:52:51.405952    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:52:51.405964    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:52:51.422655    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:52:51.422667    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:52:51.422695    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:52:51.422713    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:52:51.422717    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:52:51.422720    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:52:51.422724    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
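
Two things are visible in the 5132 block above. First, the repeated coredns ConfigMap error is the Node authorizer at work: a kubelet may only read ConfigMaps referenced by pods bound to its own node, and "no relationship found between node ... and this object" means no such binding exists yet. During an in-place upgrade this is normally transient; its persistence here, alongside the healthz timeouts, suggests the control plane never recovered far enough for scheduling to catch up. Second, the "Found kubelet problem" warnings and the "X Problems detected in kubelet" summary indicate that the journalctl output is scanned against a table of known-bad signatures and any hits are echoed back to the user. A hedged sketch of such a scan (the pattern list is illustrative, not minikube's actual table):

    package kubeletproblems

    import (
        "bufio"
        "regexp"
        "strings"
    )

    // problemPatterns is an illustrative subset; the real tool keeps its
    // own list of known-bad kubelet log signatures.
    var problemPatterns = []*regexp.Regexp{
        regexp.MustCompile(`is forbidden: User "system:node:`),
        regexp.MustCompile(`failed to list \*v1\.ConfigMap`),
    }

    // FindProblems returns every journal line matching a known pattern.
    func FindProblems(journal string) []string {
        var hits []string
        sc := bufio.NewScanner(strings.NewReader(journal))
        for sc.Scan() {
            line := sc.Text()
            for _, re := range problemPatterns {
                if re.MatchString(line) {
                    hits = append(hits, line)
                    break
                }
            }
        }
        return hits
    }
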
	I1209 16:52:52.146322    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:52.146455    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:52.158622    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:52:52.158711    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:52.169987    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:52:52.170065    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:52.181001    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:52:52.181081    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:52.191432    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:52:52.191513    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:52.203182    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:52:52.203254    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:52.213675    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:52:52.213750    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:52.223853    5393 logs.go:282] 0 containers: []
	W1209 16:52:52.223864    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:52.223933    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:52.234533    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:52:52.234552    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:52.234558    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:52.270062    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:52:52.270076    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:52:52.307106    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:52:52.307116    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:52:52.319411    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:52.319423    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:52:52.357609    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:52:52.357620    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:52:52.372407    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:52:52.372420    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:52:52.384489    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:52:52.384502    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:52:52.399717    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:52:52.399728    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:52:52.411491    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:52.411502    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:52.415603    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:52:52.415610    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:52:52.429785    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:52:52.429798    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:52:52.441656    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:52:52.441670    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:52.454089    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:52:52.454101    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:52:52.474581    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:52:52.474595    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:52:52.487506    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:52:52.487519    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:52:52.506134    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:52:52.506144    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:52:52.520822    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:52.520832    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
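
Per-component logs are collected by tailing the last 400 lines of each container found during enumeration, which accounts for all the `docker logs --tail 400 <id>` runs above. One gather step might look like this (sketch; minikube wraps the command in `/bin/bash -c` and executes it over SSH inside the guest):

    package gather

    import (
        "fmt"
        "os/exec"
    )

    // TailContainerLogs fetches the last n log lines of one container.
    // Plain local exec is shown here for brevity; the 400-line cap
    // matches the commands in the log above.
    func TailContainerLogs(id string, n int) (string, error) {
        out, err := exec.Command("docker", "logs",
            "--tail", fmt.Sprintf("%d", n), id).CombinedOutput()
        return string(out), err
    }
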
	I1209 16:52:55.047182    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:00.049382    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:00.049650    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:53:00.071281    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:53:00.071416    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:53:00.093864    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:53:00.093965    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:53:00.113209    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:53:00.113307    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:53:00.137347    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:53:00.137431    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:53:00.156180    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:53:00.156285    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:53:00.168800    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:53:00.168885    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:53:00.179070    5393 logs.go:282] 0 containers: []
	W1209 16:53:00.179083    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:53:00.179145    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:53:00.195390    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:53:00.195409    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:53:00.195414    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:53:00.233313    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:53:00.233322    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:53:00.238071    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:53:00.238083    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:53:00.252385    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:53:00.252398    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:53:00.263990    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:53:00.264002    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:53:00.276713    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:53:00.276723    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:53:00.317440    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:53:00.317450    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:53:00.332253    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:53:00.332266    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:53:00.347616    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:53:00.347628    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:53:00.383740    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:53:00.383751    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:53:00.397743    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:53:00.397753    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:53:00.418782    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:53:00.418794    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:53:00.430646    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:53:00.430656    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:53:00.444683    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:53:00.444694    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:53:00.456509    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:53:00.456519    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:53:00.478921    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:53:00.478932    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:53:00.493179    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:53:00.493190    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:53:01.425223    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:03.017662    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:06.427428    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:06.427591    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:53:06.440963    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:53:06.441056    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:53:06.452247    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:53:06.452326    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:53:06.463139    5132 logs.go:282] 4 containers: [7374e35c93c9 f86fa2ef9959 02d8d43dfbea 64247a147667]
	I1209 16:53:06.463222    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:53:06.476955    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:53:06.477031    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:53:06.487360    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:53:06.487438    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:53:06.497623    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:53:06.497699    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:53:06.508061    5132 logs.go:282] 0 containers: []
	W1209 16:53:06.508073    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:53:06.508143    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:53:06.518951    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:53:06.518975    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:53:06.518981    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:53:06.534157    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:53:06.534167    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:53:06.550757    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:53:06.550770    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:53:06.568178    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:53:06.568191    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:53:06.602651    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:53:06.602745    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:53:06.603873    5132 logs.go:123] Gathering logs for coredns [7374e35c93c9] ...
	I1209 16:53:06.603882    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7374e35c93c9"
	I1209 16:53:06.619384    5132 logs.go:123] Gathering logs for coredns [f86fa2ef9959] ...
	I1209 16:53:06.619396    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86fa2ef9959"
	I1209 16:53:06.630602    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:53:06.630613    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:53:06.655906    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:53:06.655916    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:53:06.669172    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:53:06.669185    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:53:06.709608    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:53:06.709619    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:53:06.723441    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:53:06.723454    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:53:06.735896    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:53:06.735910    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:53:06.740562    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:53:06.740571    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:53:06.752308    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:53:06.752317    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:53:06.772094    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:53:06.772105    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:53:06.793175    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:53:06.793187    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:53:06.793214    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:53:06.793219    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:53:06.793223    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:53:06.793227    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:53:06.793230    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
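
System-level logs come from journald rather than docker: the kubelet gather and the combined Docker gather above both shell out to journalctl with one or more unit filters (`-u kubelet`, `-u docker -u cri-docker`) and a 400-line cap. A sketch of that call:

    package journal

    import "os/exec"

    // UnitLogs returns the last 400 journal lines for the given systemd
    // units, mirroring `sudo journalctl -u docker -u cri-docker -n 400`.
    func UnitLogs(units ...string) ([]byte, error) {
        args := []string{"journalctl", "-n", "400"}
        for _, u := range units {
            args = append(args, "-u", u)
        }
        return exec.Command("sudo", args...).CombinedOutput()
    }
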
	I1209 16:53:08.019989    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:08.020195    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:53:08.035771    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:53:08.035871    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:53:08.047961    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:53:08.048037    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:53:08.062763    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:53:08.062838    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:53:08.073457    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:53:08.073539    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:53:08.084547    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:53:08.084621    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:53:08.095282    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:53:08.095348    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:53:08.105730    5393 logs.go:282] 0 containers: []
	W1209 16:53:08.105743    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:53:08.105804    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:53:08.116178    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:53:08.116195    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:53:08.116201    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:53:08.153533    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:53:08.153545    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:53:08.190380    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:53:08.190391    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:53:08.201335    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:53:08.201347    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:53:08.216274    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:53:08.216283    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:53:08.240021    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:53:08.240032    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:53:08.259990    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:53:08.260005    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:53:08.264516    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:53:08.264525    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:53:08.299541    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:53:08.299558    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:53:08.313392    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:53:08.313408    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:53:08.328271    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:53:08.328281    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:53:08.345550    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:53:08.345561    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:53:08.359606    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:53:08.359617    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:53:08.370979    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:53:08.370990    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:53:08.383496    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:53:08.383511    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:53:08.398739    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:53:08.398750    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:53:08.413926    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:53:08.413937    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:53:10.928049    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:15.930348    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:16.796573    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:15.930640    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:53:15.959064    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:53:15.959178    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:53:15.975783    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:53:15.975867    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:53:15.989600    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:53:15.989681    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:53:16.001513    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:53:16.001592    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:53:16.013292    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:53:16.013367    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:53:16.024024    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:53:16.024100    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:53:16.034398    5393 logs.go:282] 0 containers: []
	W1209 16:53:16.034411    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:53:16.034474    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:53:16.045602    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:53:16.045620    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:53:16.045625    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:53:16.049832    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:53:16.049842    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:53:16.085421    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:53:16.085432    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:53:16.124207    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:53:16.124217    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:53:16.137860    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:53:16.137872    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:53:16.152076    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:53:16.152087    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:53:16.164095    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:53:16.164104    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:53:16.179449    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:53:16.179461    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:53:16.196812    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:53:16.196822    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:53:16.213448    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:53:16.213460    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:53:16.225076    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:53:16.225087    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:53:16.248344    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:53:16.248354    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:53:16.260349    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:53:16.260362    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:53:16.298221    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:53:16.298237    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:53:16.312047    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:53:16.312058    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:53:16.325853    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:53:16.325862    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:53:16.336661    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:53:16.336676    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
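
The "describe nodes" gather is the only step that goes through kubectl: it invokes the version-matched binary shipped inside the guest (v1.24.1 here) against the guest kubeconfig, exactly as the Run lines above show. A sketch:

    package nodes

    import "os/exec"

    // DescribeNodes runs the kubectl binary pinned inside the guest,
    // pointed at the guest kubeconfig, as in the log lines above.
    func DescribeNodes() ([]byte, error) {
        return exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.24.1/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
    }
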
	I1209 16:53:18.850099    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:21.797066    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:21.797320    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:53:21.835380    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:53:21.835487    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:53:21.849828    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:53:21.849898    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:53:21.866511    5132 logs.go:282] 4 containers: [7374e35c93c9 f86fa2ef9959 02d8d43dfbea 64247a147667]
	I1209 16:53:21.866593    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:53:21.876956    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:53:21.877037    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:53:21.887661    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:53:21.887726    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:53:21.899187    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:53:21.899261    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:53:21.909957    5132 logs.go:282] 0 containers: []
	W1209 16:53:21.909969    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:53:21.910037    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:53:21.920744    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:53:21.920766    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:53:21.920771    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:53:21.954772    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:53:21.954865    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:53:21.955923    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:53:21.955929    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:53:21.960283    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:53:21.960292    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:53:21.976613    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:53:21.976622    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:53:22.013583    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:53:22.013593    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:53:22.028433    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:53:22.028446    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:53:22.040055    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:53:22.040065    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:53:22.053111    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:53:22.053123    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:53:22.071071    5132 logs.go:123] Gathering logs for coredns [7374e35c93c9] ...
	I1209 16:53:22.071080    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7374e35c93c9"
	I1209 16:53:22.082888    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:53:22.082897    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:53:22.097556    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:53:22.097565    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:53:22.123464    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:53:22.123472    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:53:22.135319    5132 logs.go:123] Gathering logs for coredns [f86fa2ef9959] ...
	I1209 16:53:22.135329    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86fa2ef9959"
	I1209 16:53:22.147809    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:53:22.147820    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:53:22.159790    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:53:22.159801    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:53:22.174894    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:53:22.174904    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:53:22.174931    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:53:22.174963    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:53:22.174969    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:53:22.174973    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:53:22.174980    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:53:23.850919    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:23.851175    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:53:23.875291    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:53:23.875436    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:53:23.892285    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:53:23.892384    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:53:23.905736    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:53:23.905813    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:53:23.917625    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:53:23.917709    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:53:23.928238    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:53:23.928312    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:53:23.938548    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:53:23.938620    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:53:23.948877    5393 logs.go:282] 0 containers: []
	W1209 16:53:23.948889    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:53:23.948954    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:53:23.959390    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:53:23.959409    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:53:23.959415    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:53:23.971377    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:53:23.971387    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:53:23.983821    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:53:23.983832    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:53:23.995682    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:53:23.995693    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:53:24.000157    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:53:24.000165    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:53:24.034894    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:53:24.034905    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:53:24.050400    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:53:24.050411    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:53:24.065365    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:53:24.065377    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:53:24.077281    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:53:24.077290    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:53:24.099958    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:53:24.099967    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:53:24.112431    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:53:24.112442    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:53:24.129803    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:53:24.129813    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:53:24.167672    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:53:24.167682    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:53:24.186856    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:53:24.186867    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:53:24.204731    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:53:24.204743    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:53:24.219726    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:53:24.219736    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:53:24.258196    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:53:24.258217    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:53:26.783055    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:31.787640    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:32.183091    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:31.787988    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:53:31.816948    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:53:31.817102    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:53:31.835266    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:53:31.835371    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:53:31.849757    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:53:31.849866    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:53:31.864925    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:53:31.865010    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:53:31.876970    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:53:31.877048    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:53:31.889632    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:53:31.889735    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:53:31.900287    5393 logs.go:282] 0 containers: []
	W1209 16:53:31.900299    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:53:31.900361    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:53:31.919339    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:53:31.919356    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:53:31.919361    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:53:31.959625    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:53:31.959645    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:53:31.964091    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:53:31.964097    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:53:31.978699    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:53:31.978708    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:53:32.001461    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:53:32.001471    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:53:32.014643    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:53:32.014652    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:53:32.028785    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:53:32.028794    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:53:32.066805    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:53:32.066818    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:53:32.078026    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:53:32.078037    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:53:32.089457    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:53:32.089469    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:53:32.104067    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:53:32.104077    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:53:32.115591    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:53:32.115603    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:53:32.126999    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:53:32.127010    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:53:32.161938    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:53:32.161949    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:53:32.176387    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:53:32.176398    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:53:32.191075    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:53:32.191085    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:53:32.202949    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:53:32.202961    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
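	The block above is minikube's standard log sweep: for each control-plane component it lists matching containers with a docker name filter, then tails each hit. A rough shell equivalent of one pass (component name and the 400-line tail taken from the log itself):

	    # enumerate one component's containers, then tail each of them
	    for id in $(docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}'); do
	        docker logs --tail 400 "$id"
	    done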
	I1209 16:53:34.723361    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:37.186894    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:37.187039    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:53:37.202023    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:53:37.202110    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:53:37.212933    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:53:37.213016    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:53:37.223501    5132 logs.go:282] 4 containers: [7374e35c93c9 f86fa2ef9959 02d8d43dfbea 64247a147667]
	I1209 16:53:37.223590    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:53:37.241996    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:53:37.242072    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:53:37.252421    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:53:37.252495    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:53:37.263037    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:53:37.263111    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:53:37.273358    5132 logs.go:282] 0 containers: []
	W1209 16:53:37.273369    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:53:37.273436    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:53:37.284656    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:53:37.284675    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:53:37.284681    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:53:37.289428    5132 logs.go:123] Gathering logs for coredns [7374e35c93c9] ...
	I1209 16:53:37.289435    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7374e35c93c9"
	I1209 16:53:37.301394    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:53:37.301404    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:53:37.316340    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:53:37.316351    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:53:37.328398    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:53:37.328411    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:53:39.727113    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:39.727190    5393 kubeadm.go:597] duration metric: took 4m4.066397375s to restartPrimaryControlPlane
	W1209 16:53:39.727261    5393 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 16:53:39.727285    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
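	Having spent 4m4s failing to restart the existing control plane, minikube falls back to a full kubeadm reset followed by a fresh init. The --cri-socket flag points kubeadm at cri-dockerd rather than the legacy dockershim socket; a quick way to confirm that endpoint is live (standard crictl flags):

	    # verify the CRI endpoint that kubeadm reset was told to use
	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version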
	I1209 16:53:40.721340    5393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 16:53:40.726610    5393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 16:53:40.729520    5393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 16:53:40.732151    5393 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 16:53:40.732157    5393 kubeadm.go:157] found existing configuration files:
	
	I1209 16:53:40.732192    5393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/admin.conf
	I1209 16:53:40.735662    5393 kubeadm.go:163] "https://control-plane.minikube.internal:65214" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 16:53:40.735696    5393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 16:53:40.738790    5393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/kubelet.conf
	I1209 16:53:40.741384    5393 kubeadm.go:163] "https://control-plane.minikube.internal:65214" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 16:53:40.741416    5393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 16:53:40.744010    5393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/controller-manager.conf
	I1209 16:53:40.747191    5393 kubeadm.go:163] "https://control-plane.minikube.internal:65214" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 16:53:40.747220    5393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 16:53:40.750346    5393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/scheduler.conf
	I1209 16:53:40.752675    5393 kubeadm.go:163] "https://control-plane.minikube.internal:65214" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 16:53:40.752703    5393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
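	The grep/rm pairs above implement the stale-kubeconfig sweep: each file under /etc/kubernetes must mention the expected control-plane endpoint, and any file that does not (or, as here, does not exist) is removed so that kubeadm init regenerates it. Condensed into a loop (endpoint and port from this run):

	    ep='https://control-plane.minikube.internal:65214'
	    for f in admin kubelet controller-manager scheduler; do
	        sudo grep -q "$ep" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	    done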
	I1209 16:53:40.755666    5393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 16:53:40.774216    5393 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1209 16:53:40.774276    5393 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 16:53:40.826320    5393 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 16:53:40.826381    5393 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 16:53:40.826426    5393 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 16:53:40.874317    5393 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 16:53:40.877396    5393 out.go:235]   - Generating certificates and keys ...
	I1209 16:53:40.877432    5393 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 16:53:40.877475    5393 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 16:53:40.877530    5393 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 16:53:40.877564    5393 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 16:53:40.877612    5393 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 16:53:40.877646    5393 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 16:53:40.877689    5393 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 16:53:40.877716    5393 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 16:53:40.881192    5393 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 16:53:40.881245    5393 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 16:53:40.881269    5393 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 16:53:40.881298    5393 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 16:53:40.940605    5393 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 16:53:41.097398    5393 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 16:53:41.188301    5393 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 16:53:41.314172    5393 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 16:53:41.348498    5393 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 16:53:41.348924    5393 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 16:53:41.348987    5393 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 16:53:41.428944    5393 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 16:53:37.362468    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:53:37.362481    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:53:37.376766    5132 logs.go:123] Gathering logs for coredns [f86fa2ef9959] ...
	I1209 16:53:37.376777    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86fa2ef9959"
	I1209 16:53:37.388742    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:53:37.388754    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:53:37.400818    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:53:37.400829    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:53:37.435079    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:53:37.435174    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:53:37.436301    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:53:37.436307    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:53:37.451403    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:53:37.451412    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:53:37.463628    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:53:37.463638    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:53:37.481383    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:53:37.481393    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:53:37.506527    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:53:37.506538    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:53:37.524639    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:53:37.524651    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:53:37.537884    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:53:37.537897    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:53:37.537922    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:53:37.537927    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:53:37.537930    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:53:37.537943    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:53:37.537948    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
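	The two flagged kubelet problems are Node-authorizer denials: under the Node authorization mode a kubelet may only read ConfigMaps referenced by pods already bound to its node, and no such binding was found for the coredns ConfigMap at that moment. The decision can be replayed against the API server (identity strings taken from the log):

	    kubectl auth can-i list configmaps -n kube-system \
	        --as=system:node:running-upgrade-688000 --as-group=system:nodes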
	I1209 16:53:41.431725    5393 out.go:235]   - Booting up control plane ...
	I1209 16:53:41.431766    5393 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 16:53:41.431798    5393 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 16:53:41.431836    5393 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 16:53:41.431878    5393 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 16:53:41.431962    5393 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 16:53:45.932820    5393 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502050 seconds
	I1209 16:53:45.932911    5393 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 16:53:45.937743    5393 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 16:53:46.459781    5393 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 16:53:46.460025    5393 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-632000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 16:53:46.966161    5393 kubeadm.go:310] [bootstrap-token] Using token: munmqi.ceqq9zvy2a0d2cid
	I1209 16:53:46.972650    5393 out.go:235]   - Configuring RBAC rules ...
	I1209 16:53:46.972737    5393 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 16:53:46.972810    5393 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 16:53:46.976539    5393 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 16:53:46.977802    5393 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 16:53:46.979429    5393 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 16:53:46.980711    5393 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 16:53:46.984462    5393 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 16:53:47.171701    5393 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 16:53:47.374210    5393 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 16:53:47.374393    5393 kubeadm.go:310] 
	I1209 16:53:47.374431    5393 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 16:53:47.374436    5393 kubeadm.go:310] 
	I1209 16:53:47.374538    5393 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 16:53:47.374548    5393 kubeadm.go:310] 
	I1209 16:53:47.374560    5393 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 16:53:47.374588    5393 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 16:53:47.374619    5393 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 16:53:47.374622    5393 kubeadm.go:310] 
	I1209 16:53:47.374649    5393 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 16:53:47.374652    5393 kubeadm.go:310] 
	I1209 16:53:47.374695    5393 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 16:53:47.374697    5393 kubeadm.go:310] 
	I1209 16:53:47.374726    5393 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 16:53:47.374762    5393 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 16:53:47.374838    5393 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 16:53:47.374840    5393 kubeadm.go:310] 
	I1209 16:53:47.374944    5393 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 16:53:47.374984    5393 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 16:53:47.374989    5393 kubeadm.go:310] 
	I1209 16:53:47.375036    5393 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token munmqi.ceqq9zvy2a0d2cid \
	I1209 16:53:47.375091    5393 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eb7b4eec38a0897ce971e2bba2a6b79ec587773d147d857ca417d407ce72cb1f \
	I1209 16:53:47.375103    5393 kubeadm.go:310] 	--control-plane 
	I1209 16:53:47.375106    5393 kubeadm.go:310] 
	I1209 16:53:47.375175    5393 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 16:53:47.375195    5393 kubeadm.go:310] 
	I1209 16:53:47.375278    5393 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token munmqi.ceqq9zvy2a0d2cid \
	I1209 16:53:47.375343    5393 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eb7b4eec38a0897ce971e2bba2a6b79ec587773d147d857ca417d407ce72cb1f 
	I1209 16:53:47.375393    5393 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
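	The printed join command pairs a short-lived bootstrap token with a pin of the cluster CA's public key. If the hash ever needs to be recomputed, the standard kubeadm recipe works against the CA in the certificateDir used above (/var/lib/minikube/certs on minikube):

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	        | openssl rsa -pubin -outform der 2>/dev/null \
	        | openssl dgst -sha256 -hex | sed 's/^.* //'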
	I1209 16:53:47.375495    5393 cni.go:84] Creating CNI manager for ""
	I1209 16:53:47.375504    5393 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:53:47.379318    5393 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 16:53:47.385381    5393 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 16:53:47.388530    5393 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
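	The 496-byte conflist scp'd above is not reproduced in the log, but minikube's bridge CNI config is of the following general shape; the 10.244.0.0/16 pod subnet is consistent with the 10.244.0.x pod addresses seen later in the CoreDNS logs (contents illustrative, not the byte-exact file):

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "addIf": "true",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF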
	I1209 16:53:47.393235    5393 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 16:53:47.393304    5393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 16:53:47.393318    5393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-632000 minikube.k8s.io/updated_at=2024_12_09T16_53_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=stopped-upgrade-632000 minikube.k8s.io/primary=true
	I1209 16:53:47.457577    5393 ops.go:34] apiserver oom_adj: -16
	I1209 16:53:47.457596    5393 kubeadm.go:1113] duration metric: took 64.328542ms to wait for elevateKubeSystemPrivileges
	I1209 16:53:47.457605    5393 kubeadm.go:394] duration metric: took 4m11.808330834s to StartCluster
	I1209 16:53:47.457613    5393 settings.go:142] acquiring lock: {Name:mk6085b49e250ce3863979186260a283889e4dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:53:47.457708    5393 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:53:47.458179    5393 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/kubeconfig: {Name:mk5092322010dd3bee2f23e3f2812067ca57270f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:53:47.458425    5393 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:53:47.458537    5393 config.go:182] Loaded profile config "stopped-upgrade-632000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:53:47.458485    5393 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 16:53:47.458568    5393 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-632000"
	I1209 16:53:47.458577    5393 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-632000"
	W1209 16:53:47.458581    5393 addons.go:243] addon storage-provisioner should already be in state true
	I1209 16:53:47.458593    5393 host.go:66] Checking if "stopped-upgrade-632000" exists ...
	I1209 16:53:47.458599    5393 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-632000"
	I1209 16:53:47.458609    5393 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-632000"
	I1209 16:53:47.462306    5393 out.go:177] * Verifying Kubernetes components...
	I1209 16:53:47.463007    5393 kapi.go:59] client config for stopped-upgrade-632000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/client.key", CAFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1066cf740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 16:53:47.465579    5393 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-632000"
	W1209 16:53:47.465584    5393 addons.go:243] addon default-storageclass should already be in state true
	I1209 16:53:47.465592    5393 host.go:66] Checking if "stopped-upgrade-632000" exists ...
	I1209 16:53:47.466111    5393 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 16:53:47.466116    5393 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 16:53:47.466121    5393 sshutil.go:53] new ssh client: &{IP:localhost Port:65179 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/id_rsa Username:docker}
	I1209 16:53:47.466389    5393 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:53:47.470266    5393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:53:47.473292    5393 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 16:53:47.473297    5393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 16:53:47.473302    5393 sshutil.go:53] new ssh client: &{IP:localhost Port:65179 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/id_rsa Username:docker}
	I1209 16:53:47.562899    5393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 16:53:47.567754    5393 api_server.go:52] waiting for apiserver process to appear ...
	I1209 16:53:47.567807    5393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 16:53:47.571704    5393 api_server.go:72] duration metric: took 113.252584ms to wait for apiserver process to appear ...
	I1209 16:53:47.571714    5393 api_server.go:88] waiting for apiserver healthz status ...
	I1209 16:53:47.571721    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:47.578653    5393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 16:53:47.640700    5393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
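	Addon installation follows the same pattern for every manifest: scp the YAML into /etc/kubernetes/addons inside the guest, then apply it with the pinned kubectl against the in-VM kubeconfig. The manual equivalent of the two applies above:

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	        /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/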
	I1209 16:53:47.935849    5393 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 16:53:47.935877    5393 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 16:53:47.544035    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:52.574027    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:52.574077    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:52.546908    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:52.547289    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:53:52.593457    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:53:52.593600    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:53:52.612465    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:53:52.612555    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:53:52.626416    5132 logs.go:282] 4 containers: [7374e35c93c9 f86fa2ef9959 02d8d43dfbea 64247a147667]
	I1209 16:53:52.626502    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:53:52.638790    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:53:52.638864    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:53:52.652303    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:53:52.652373    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:53:52.663298    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:53:52.663373    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:53:52.673973    5132 logs.go:282] 0 containers: []
	W1209 16:53:52.673982    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:53:52.674041    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:53:52.684910    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:53:52.684926    5132 logs.go:123] Gathering logs for coredns [7374e35c93c9] ...
	I1209 16:53:52.684931    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7374e35c93c9"
	I1209 16:53:52.698068    5132 logs.go:123] Gathering logs for coredns [f86fa2ef9959] ...
	I1209 16:53:52.698079    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86fa2ef9959"
	I1209 16:53:52.710140    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:53:52.710154    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:53:52.727974    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:53:52.727987    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:53:52.732404    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:53:52.732412    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:53:52.748011    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:53:52.748022    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:53:52.772847    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:53:52.772857    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:53:52.785991    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:53:52.786001    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:53:52.819629    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:53:52.819722    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:53:52.820817    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:53:52.820825    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:53:52.832594    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:53:52.832605    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:53:52.844692    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:53:52.844704    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:53:52.856917    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:53:52.856928    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:53:52.892291    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:53:52.892304    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:53:52.906537    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:53:52.906550    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:53:52.921239    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:53:52.921250    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:53:52.938963    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:53:52.938972    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:53:52.938999    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:53:52.939003    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:53:52.939006    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:53:52.939009    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:53:52.939025    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:53:57.574851    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:57.574900    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:54:02.575454    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:02.575492    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:54:02.943857    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:54:07.576091    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:07.576119    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:54:07.946381    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:07.946594    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:54:07.973848    5132 logs.go:282] 1 containers: [3946ed1a767e]
	I1209 16:54:07.973991    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:54:07.991818    5132 logs.go:282] 1 containers: [96eb4ae268e3]
	I1209 16:54:07.991912    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:54:08.005859    5132 logs.go:282] 4 containers: [7374e35c93c9 f86fa2ef9959 02d8d43dfbea 64247a147667]
	I1209 16:54:08.005944    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:54:08.017174    5132 logs.go:282] 1 containers: [5d38dadfbd16]
	I1209 16:54:08.017245    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:54:08.027878    5132 logs.go:282] 1 containers: [00507a16922f]
	I1209 16:54:08.027969    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:54:08.043360    5132 logs.go:282] 1 containers: [e3cf861a9bb6]
	I1209 16:54:08.043441    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:54:08.065054    5132 logs.go:282] 0 containers: []
	W1209 16:54:08.065071    5132 logs.go:284] No container was found matching "kindnet"
	I1209 16:54:08.065135    5132 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:54:08.077681    5132 logs.go:282] 1 containers: [1f492ad3a491]
	I1209 16:54:08.077713    5132 logs.go:123] Gathering logs for kube-scheduler [5d38dadfbd16] ...
	I1209 16:54:08.077721    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d38dadfbd16"
	I1209 16:54:08.093259    5132 logs.go:123] Gathering logs for kube-controller-manager [e3cf861a9bb6] ...
	I1209 16:54:08.093272    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e3cf861a9bb6"
	I1209 16:54:08.111029    5132 logs.go:123] Gathering logs for container status ...
	I1209 16:54:08.111039    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:54:08.123128    5132 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:54:08.123141    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:54:08.159674    5132 logs.go:123] Gathering logs for coredns [f86fa2ef9959] ...
	I1209 16:54:08.159686    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f86fa2ef9959"
	I1209 16:54:08.180493    5132 logs.go:123] Gathering logs for coredns [02d8d43dfbea] ...
	I1209 16:54:08.180505    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 02d8d43dfbea"
	I1209 16:54:08.193618    5132 logs.go:123] Gathering logs for storage-provisioner [1f492ad3a491] ...
	I1209 16:54:08.193630    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f492ad3a491"
	I1209 16:54:08.205391    5132 logs.go:123] Gathering logs for kubelet ...
	I1209 16:54:08.205401    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1209 16:54:08.239100    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:54:08.239191    5132 logs.go:138] Found kubelet problem: Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:54:08.240249    5132 logs.go:123] Gathering logs for dmesg ...
	I1209 16:54:08.240254    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:54:08.244797    5132 logs.go:123] Gathering logs for coredns [7374e35c93c9] ...
	I1209 16:54:08.244806    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7374e35c93c9"
	I1209 16:54:08.261201    5132 logs.go:123] Gathering logs for Docker ...
	I1209 16:54:08.261212    5132 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:54:08.285941    5132 logs.go:123] Gathering logs for coredns [64247a147667] ...
	I1209 16:54:08.285950    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64247a147667"
	I1209 16:54:08.298311    5132 logs.go:123] Gathering logs for kube-proxy [00507a16922f] ...
	I1209 16:54:08.298324    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00507a16922f"
	I1209 16:54:08.310014    5132 logs.go:123] Gathering logs for kube-apiserver [3946ed1a767e] ...
	I1209 16:54:08.310025    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3946ed1a767e"
	I1209 16:54:08.325276    5132 logs.go:123] Gathering logs for etcd [96eb4ae268e3] ...
	I1209 16:54:08.325286    5132 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96eb4ae268e3"
	I1209 16:54:08.339734    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:54:08.339746    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1209 16:54:08.339771    5132 out.go:270] X Problems detected in kubelet:
	W1209 16:54:08.339776    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	W1209 16:54:08.339783    5132 out.go:270]   Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	I1209 16:54:08.339787    5132 out.go:358] Setting ErrFile to fd 2...
	I1209 16:54:08.339790    5132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:54:12.576700    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:12.576733    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:54:17.577733    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:17.577752    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1209 16:54:17.938698    5393 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1209 16:54:17.944026    5393 out.go:177] * Enabled addons: storage-provisioner
	I1209 16:54:17.950885    5393 addons.go:510] duration metric: took 30.490595458s for enable addons: enabled=[storage-provisioner]
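	The default-storageclass callback failed on the same dial timeout the healthz probes keep hitting, so only storage-provisioner is reported as enabled. The failing call can be issued directly to confirm whether the API server has come back (paths from the log):

	    sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
	        --kubeconfig=/var/lib/minikube/kubeconfig get storageclasses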
	I1209 16:54:18.343351    5132 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:54:23.345819    5132 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:23.350461    5132 out.go:201] 
	W1209 16:54:23.354355    5132 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1209 16:54:23.354367    5132 out.go:270] * 
	W1209 16:54:23.355241    5132 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:54:23.368302    5132 out.go:201] 
	I1209 16:54:22.578545    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:22.578566    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:54:27.579566    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:27.579625    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:54:32.580937    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:32.580962    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
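	Both processes are polling GET /healthz with a client-side deadline and retrying every five seconds; "context deadline exceeded ... while awaiting headers" means no response arrived within the deadline at all, not that the server reported itself unhealthy. The loop reduces to (5-second budget inferred from the log's cadence):

	    while ! curl -fsk --max-time 5 https://10.0.2.15:8443/healthz; do
	        sleep 5
	    done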
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-12-10 00:45:19 UTC, ends at Tue 2024-12-10 00:54:39 UTC. --
	Dec 10 00:54:20 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:20Z" level=error msg="ContainerStats resp: {0x40005715c0 linux}"
	Dec 10 00:54:20 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:20Z" level=error msg="ContainerStats resp: {0x40004b7d40 linux}"
	Dec 10 00:54:20 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:20Z" level=error msg="ContainerStats resp: {0x4000571a40 linux}"
	Dec 10 00:54:21 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:21Z" level=error msg="ContainerStats resp: {0x4000892e40 linux}"
	Dec 10 00:54:22 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:22Z" level=error msg="ContainerStats resp: {0x4000571900 linux}"
	Dec 10 00:54:22 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:22Z" level=error msg="ContainerStats resp: {0x40004b77c0 linux}"
	Dec 10 00:54:22 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:22Z" level=error msg="ContainerStats resp: {0x4000571ec0 linux}"
	Dec 10 00:54:22 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:22Z" level=error msg="ContainerStats resp: {0x40000b9700 linux}"
	Dec 10 00:54:22 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:22Z" level=error msg="ContainerStats resp: {0x40000b9c80 linux}"
	Dec 10 00:54:22 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:22Z" level=error msg="ContainerStats resp: {0x400089c600 linux}"
	Dec 10 00:54:22 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:22Z" level=error msg="ContainerStats resp: {0x400089ca00 linux}"
	Dec 10 00:54:22 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:22Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 10 00:54:27 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:27Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 10 00:54:32 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:32Z" level=error msg="ContainerStats resp: {0x40005b4ec0 linux}"
	Dec 10 00:54:32 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:32Z" level=error msg="ContainerStats resp: {0x40008987c0 linux}"
	Dec 10 00:54:32 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:32Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 10 00:54:33 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:33Z" level=error msg="ContainerStats resp: {0x4000993040 linux}"
	Dec 10 00:54:34 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:34Z" level=error msg="ContainerStats resp: {0x4000993f00 linux}"
	Dec 10 00:54:34 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:34Z" level=error msg="ContainerStats resp: {0x4000899700 linux}"
	Dec 10 00:54:34 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:34Z" level=error msg="ContainerStats resp: {0x4000899b80 linux}"
	Dec 10 00:54:34 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:34Z" level=error msg="ContainerStats resp: {0x40008664c0 linux}"
	Dec 10 00:54:34 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:34Z" level=error msg="ContainerStats resp: {0x4000866900 linux}"
	Dec 10 00:54:34 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:34Z" level=error msg="ContainerStats resp: {0x4000866dc0 linux}"
	Dec 10 00:54:34 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:34Z" level=error msg="ContainerStats resp: {0x4000571ec0 linux}"
	Dec 10 00:54:37 running-upgrade-688000 cri-dockerd[3053]: time="2024-12-10T00:54:37Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	7ad1d67fad557       edaa71f2aee88       20 seconds ago      Running             coredns                   2                   db00e9e422e62
	5e60c0f7960fb       edaa71f2aee88       20 seconds ago      Running             coredns                   2                   5d584960bce2b
	7374e35c93c9e       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   5d584960bce2b
	f86fa2ef99597       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   db00e9e422e62
	00507a16922f6       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   7fdbea8498f6a
	1f492ad3a4917       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   f24d54ef10d7e
	e3cf861a9bb62       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   2742ae1a4186f
	96eb4ae268e3d       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   17c624c10c1b4
	3946ed1a767ed       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   cd2691966f05f
	5d38dadfbd169       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   e990478c25fe6
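	Note the restart pattern in the table: both attempt-1 coredns containers have Exited and their attempt-2 replacements are only ~20 seconds old, while the attempt-0 control-plane containers from the original start are still running. A single component can be re-queried with crictl's name filter:

	    sudo crictl ps -a --name coredns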
	
	
	==> coredns [5e60c0f7960f] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7049733779276261342.6881584659149162395. HINFO: read udp 10.244.0.2:43022->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7049733779276261342.6881584659149162395. HINFO: read udp 10.244.0.2:41887->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7049733779276261342.6881584659149162395. HINFO: read udp 10.244.0.2:43734->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7049733779276261342.6881584659149162395. HINFO: read udp 10.244.0.2:48234->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7049733779276261342.6881584659149162395. HINFO: read udp 10.244.0.2:51837->10.0.2.3:53: i/o timeout
	
	
	==> coredns [7374e35c93c9] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5403087072690680720.295672050739937856. HINFO: read udp 10.244.0.2:46311->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5403087072690680720.295672050739937856. HINFO: read udp 10.244.0.2:53811->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5403087072690680720.295672050739937856. HINFO: read udp 10.244.0.2:35826->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5403087072690680720.295672050739937856. HINFO: read udp 10.244.0.2:60165->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5403087072690680720.295672050739937856. HINFO: read udp 10.244.0.2:42406->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5403087072690680720.295672050739937856. HINFO: read udp 10.244.0.2:34486->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5403087072690680720.295672050739937856. HINFO: read udp 10.244.0.2:51533->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5403087072690680720.295672050739937856. HINFO: read udp 10.244.0.2:45013->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5403087072690680720.295672050739937856. HINFO: read udp 10.244.0.2:45040->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5403087072690680720.295672050739937856. HINFO: read udp 10.244.0.2:37936->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7ad1d67fad55] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5942031113121303864.7984867155269075599. HINFO: read udp 10.244.0.3:55777->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5942031113121303864.7984867155269075599. HINFO: read udp 10.244.0.3:33095->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5942031113121303864.7984867155269075599. HINFO: read udp 10.244.0.3:40083->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5942031113121303864.7984867155269075599. HINFO: read udp 10.244.0.3:49211->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5942031113121303864.7984867155269075599. HINFO: read udp 10.244.0.3:52269->10.0.2.3:53: i/o timeout
	
	
	==> coredns [f86fa2ef9959] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8950943841510755645.1211750065435840176. HINFO: read udp 10.244.0.3:40886->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8950943841510755645.1211750065435840176. HINFO: read udp 10.244.0.3:45870->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8950943841510755645.1211750065435840176. HINFO: read udp 10.244.0.3:47458->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8950943841510755645.1211750065435840176. HINFO: read udp 10.244.0.3:48369->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8950943841510755645.1211750065435840176. HINFO: read udp 10.244.0.3:51709->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8950943841510755645.1211750065435840176. HINFO: read udp 10.244.0.3:40203->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8950943841510755645.1211750065435840176. HINFO: read udp 10.244.0.3:54741->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8950943841510755645.1211750065435840176. HINFO: read udp 10.244.0.3:47041->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8950943841510755645.1211750065435840176. HINFO: read udp 10.244.0.3:41392->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8950943841510755645.1211750065435840176. HINFO: read udp 10.244.0.3:32865->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-688000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-688000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
	                    minikube.k8s.io/name=running-upgrade-688000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_09T16_50_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 10 Dec 2024 00:50:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-688000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 10 Dec 2024 00:54:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 10 Dec 2024 00:50:18 +0000   Tue, 10 Dec 2024 00:50:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 10 Dec 2024 00:50:18 +0000   Tue, 10 Dec 2024 00:50:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 10 Dec 2024 00:50:18 +0000   Tue, 10 Dec 2024 00:50:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 10 Dec 2024 00:50:18 +0000   Tue, 10 Dec 2024 00:50:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-688000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 132bc7466e234fb8990859994702cee9
	  System UUID:                132bc7466e234fb8990859994702cee9
	  Boot ID:                    a15b5cd7-1e60-46c0-97ec-6de614696427
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-b4v6c                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 coredns-6d4b75cb6d-kbpjd                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 etcd-running-upgrade-688000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m21s
	  kube-system                 kube-apiserver-running-upgrade-688000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-running-upgrade-688000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-proxy-754fl                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-running-upgrade-688000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m7s   kube-proxy       
	  Normal  NodeReady                4m21s  kubelet          Node running-upgrade-688000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s  kubelet          Node running-upgrade-688000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s  kubelet          Node running-upgrade-688000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s  kubelet          Node running-upgrade-688000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m21s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m8s   node-controller  Node running-upgrade-688000 event: Registered Node running-upgrade-688000 in Controller
	
	
	==> dmesg <==
	[  +1.749488] systemd-fstab-generator[877]: Ignoring "noauto" for root device
	[  +0.074560] systemd-fstab-generator[888]: Ignoring "noauto" for root device
	[  +0.080801] systemd-fstab-generator[899]: Ignoring "noauto" for root device
	[  +1.138799] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.093379] systemd-fstab-generator[1050]: Ignoring "noauto" for root device
	[  +0.081120] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[  +2.382115] systemd-fstab-generator[1295]: Ignoring "noauto" for root device
	[  +9.675468] systemd-fstab-generator[1934]: Ignoring "noauto" for root device
	[  +2.403501] systemd-fstab-generator[2196]: Ignoring "noauto" for root device
	[  +0.144430] systemd-fstab-generator[2229]: Ignoring "noauto" for root device
	[  +0.085288] systemd-fstab-generator[2242]: Ignoring "noauto" for root device
	[  +0.092727] systemd-fstab-generator[2255]: Ignoring "noauto" for root device
	[Dec10 00:46] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.210720] systemd-fstab-generator[3009]: Ignoring "noauto" for root device
	[  +0.080669] systemd-fstab-generator[3021]: Ignoring "noauto" for root device
	[  +0.078602] systemd-fstab-generator[3032]: Ignoring "noauto" for root device
	[  +0.086377] systemd-fstab-generator[3046]: Ignoring "noauto" for root device
	[  +2.381984] systemd-fstab-generator[3200]: Ignoring "noauto" for root device
	[  +2.672870] systemd-fstab-generator[3585]: Ignoring "noauto" for root device
	[  +1.412962] systemd-fstab-generator[3887]: Ignoring "noauto" for root device
	[ +20.586574] kauditd_printk_skb: 68 callbacks suppressed
	[Dec10 00:50] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.341731] systemd-fstab-generator[11164]: Ignoring "noauto" for root device
	[  +6.156418] systemd-fstab-generator[11767]: Ignoring "noauto" for root device
	[  +0.470398] systemd-fstab-generator[11904]: Ignoring "noauto" for root device
	
	
	==> etcd [96eb4ae268e3] <==
	{"level":"info","ts":"2024-12-10T00:50:13.744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-12-10T00:50:13.744Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-12-10T00:50:13.762Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-10T00:50:13.762Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-10T00:50:13.762Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-12-10T00:50:13.762Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-12-10T00:50:13.762Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-10T00:50:14.622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-10T00:50:14.622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-10T00:50:14.622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-12-10T00:50:14.622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-12-10T00:50:14.622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-12-10T00:50:14.622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-12-10T00:50:14.622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-12-10T00:50:14.623Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:50:14.624Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:50:14.624Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:50:14.624Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-10T00:50:14.624Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-688000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-10T00:50:14.624Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T00:50:14.625Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-10T00:50:14.625Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-10T00:50:14.625Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-10T00:50:14.627Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-10T00:50:14.629Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
	
	==> kernel <==
	 00:54:39 up 9 min,  0 users,  load average: 0.83, 0.39, 0.18
	Linux running-upgrade-688000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [3946ed1a767e] <==
	I1210 00:50:15.873044       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1210 00:50:15.893507       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1210 00:50:15.894696       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1210 00:50:15.899594       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1210 00:50:15.899724       1 cache.go:39] Caches are synced for autoregister controller
	I1210 00:50:15.899888       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1210 00:50:15.904970       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1210 00:50:16.622052       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1210 00:50:16.801392       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1210 00:50:16.802895       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1210 00:50:16.802902       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1210 00:50:16.918562       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1210 00:50:16.930545       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1210 00:50:16.964093       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1210 00:50:16.966102       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1210 00:50:16.966416       1 controller.go:611] quota admission added evaluator for: endpoints
	I1210 00:50:16.968440       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1210 00:50:17.959861       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1210 00:50:18.509505       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1210 00:50:18.512672       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1210 00:50:18.528453       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1210 00:50:18.560360       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1210 00:50:31.570844       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1210 00:50:31.669784       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1210 00:50:32.463679       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [e3cf861a9bb6] <==
	I1210 00:50:31.460123       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1210 00:50:31.460128       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1210 00:50:31.461501       1 shared_informer.go:262] Caches are synced for stateful set
	I1210 00:50:31.461535       1 shared_informer.go:262] Caches are synced for taint
	I1210 00:50:31.461579       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1210 00:50:31.461642       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-688000. Assuming now as a timestamp.
	I1210 00:50:31.461663       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1210 00:50:31.461773       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1210 00:50:31.461829       1 event.go:294] "Event occurred" object="running-upgrade-688000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-688000 event: Registered Node running-upgrade-688000 in Controller"
	I1210 00:50:31.475777       1 shared_informer.go:262] Caches are synced for resource quota
	I1210 00:50:31.483823       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1210 00:50:31.484835       1 shared_informer.go:262] Caches are synced for persistent volume
	I1210 00:50:31.491053       1 shared_informer.go:262] Caches are synced for disruption
	I1210 00:50:31.491076       1 disruption.go:371] Sending events to api server.
	I1210 00:50:31.492140       1 shared_informer.go:262] Caches are synced for GC
	I1210 00:50:31.493247       1 shared_informer.go:262] Caches are synced for PVC protection
	I1210 00:50:31.494323       1 shared_informer.go:262] Caches are synced for job
	I1210 00:50:31.498683       1 shared_informer.go:262] Caches are synced for resource quota
	I1210 00:50:31.572560       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1210 00:50:31.673776       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-754fl"
	I1210 00:50:31.821008       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-kbpjd"
	I1210 00:50:31.823311       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-b4v6c"
	I1210 00:50:31.912396       1 shared_informer.go:262] Caches are synced for garbage collector
	I1210 00:50:31.960367       1 shared_informer.go:262] Caches are synced for garbage collector
	I1210 00:50:31.960379       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [00507a16922f] <==
	I1210 00:50:32.452242       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1210 00:50:32.452272       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1210 00:50:32.452284       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1210 00:50:32.461938       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1210 00:50:32.461950       1 server_others.go:206] "Using iptables Proxier"
	I1210 00:50:32.461969       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1210 00:50:32.462073       1 server.go:661] "Version info" version="v1.24.1"
	I1210 00:50:32.462079       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1210 00:50:32.462377       1 config.go:317] "Starting service config controller"
	I1210 00:50:32.462388       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1210 00:50:32.462400       1 config.go:226] "Starting endpoint slice config controller"
	I1210 00:50:32.462403       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1210 00:50:32.462660       1 config.go:444] "Starting node config controller"
	I1210 00:50:32.462664       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1210 00:50:32.562950       1 shared_informer.go:262] Caches are synced for node config
	I1210 00:50:32.562960       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1210 00:50:32.562969       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [5d38dadfbd16] <==
	W1210 00:50:15.864781       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1210 00:50:15.864838       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1210 00:50:15.864911       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1210 00:50:15.865086       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1210 00:50:15.864940       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1210 00:50:15.865151       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1210 00:50:15.864959       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1210 00:50:15.865253       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1210 00:50:15.864974       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1210 00:50:15.865301       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1210 00:50:15.864989       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1210 00:50:15.865366       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1210 00:50:16.673608       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1210 00:50:16.673634       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1210 00:50:16.748806       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1210 00:50:16.748831       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1210 00:50:16.801107       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1210 00:50:16.801152       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1210 00:50:16.841314       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1210 00:50:16.841549       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1210 00:50:16.863142       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1210 00:50:16.863157       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1210 00:50:16.869651       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1210 00:50:16.869704       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1210 00:50:17.162095       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-12-10 00:45:19 UTC, ends at Tue 2024-12-10 00:54:39 UTC. --
	Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: I1210 00:50:31.376787   11773 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: I1210 00:50:31.466708   11773 topology_manager.go:200] "Topology Admit Handler"
	Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: I1210 00:50:31.476810   11773 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdj6p\" (UniqueName: \"kubernetes.io/projected/96f58c22-8888-4ca4-a3bc-382cedb97007-kube-api-access-kdj6p\") pod \"storage-provisioner\" (UID: \"96f58c22-8888-4ca4-a3bc-382cedb97007\") " pod="kube-system/storage-provisioner"
	Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: I1210 00:50:31.476833   11773 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/96f58c22-8888-4ca4-a3bc-382cedb97007-tmp\") pod \"storage-provisioner\" (UID: \"96f58c22-8888-4ca4-a3bc-382cedb97007\") " pod="kube-system/storage-provisioner"
	Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: I1210 00:50:31.675705   11773 topology_manager.go:200] "Topology Admit Handler"
	Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: I1210 00:50:31.778256   11773 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0de5116a-8c12-4625-a2fb-e6274801e18e-kube-proxy\") pod \"kube-proxy-754fl\" (UID: \"0de5116a-8c12-4625-a2fb-e6274801e18e\") " pod="kube-system/kube-proxy-754fl"
	Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: I1210 00:50:31.778286   11773 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0de5116a-8c12-4625-a2fb-e6274801e18e-xtables-lock\") pod \"kube-proxy-754fl\" (UID: \"0de5116a-8c12-4625-a2fb-e6274801e18e\") " pod="kube-system/kube-proxy-754fl"
	Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: I1210 00:50:31.778298   11773 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0de5116a-8c12-4625-a2fb-e6274801e18e-lib-modules\") pod \"kube-proxy-754fl\" (UID: \"0de5116a-8c12-4625-a2fb-e6274801e18e\") " pod="kube-system/kube-proxy-754fl"
	Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: I1210 00:50:31.778309   11773 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fnjs\" (UniqueName: \"kubernetes.io/projected/0de5116a-8c12-4625-a2fb-e6274801e18e-kube-api-access-8fnjs\") pod \"kube-proxy-754fl\" (UID: \"0de5116a-8c12-4625-a2fb-e6274801e18e\") " pod="kube-system/kube-proxy-754fl"
	Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: I1210 00:50:31.824898   11773 topology_manager.go:200] "Topology Admit Handler"
	Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: W1210 00:50:31.827022   11773 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: E1210 00:50:31.827047   11773 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-688000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-688000' and this object
	Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: I1210 00:50:31.836355   11773 topology_manager.go:200] "Topology Admit Handler"
	Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: I1210 00:50:31.878899   11773 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73b8c6a6-acd5-4fe8-8f36-78a3b0330deb-config-volume\") pod \"coredns-6d4b75cb6d-b4v6c\" (UID: \"73b8c6a6-acd5-4fe8-8f36-78a3b0330deb\") " pod="kube-system/coredns-6d4b75cb6d-b4v6c"
	Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: I1210 00:50:31.878996   11773 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zc88\" (UniqueName: \"kubernetes.io/projected/73b8c6a6-acd5-4fe8-8f36-78a3b0330deb-kube-api-access-9zc88\") pod \"coredns-6d4b75cb6d-b4v6c\" (UID: \"73b8c6a6-acd5-4fe8-8f36-78a3b0330deb\") " pod="kube-system/coredns-6d4b75cb6d-b4v6c"
	Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: I1210 00:50:31.879037   11773 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4485782-5ee1-493b-9f20-c4f050f84a17-config-volume\") pod \"coredns-6d4b75cb6d-kbpjd\" (UID: \"a4485782-5ee1-493b-9f20-c4f050f84a17\") " pod="kube-system/coredns-6d4b75cb6d-kbpjd"
	Dec 10 00:50:31 running-upgrade-688000 kubelet[11773]: I1210 00:50:31.879061   11773 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpmrq\" (UniqueName: \"kubernetes.io/projected/a4485782-5ee1-493b-9f20-c4f050f84a17-kube-api-access-hpmrq\") pod \"coredns-6d4b75cb6d-kbpjd\" (UID: \"a4485782-5ee1-493b-9f20-c4f050f84a17\") " pod="kube-system/coredns-6d4b75cb6d-kbpjd"
	Dec 10 00:50:32 running-upgrade-688000 kubelet[11773]: E1210 00:50:32.979972   11773 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Dec 10 00:50:32 running-upgrade-688000 kubelet[11773]: E1210 00:50:32.980019   11773 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/a4485782-5ee1-493b-9f20-c4f050f84a17-config-volume podName:a4485782-5ee1-493b-9f20-c4f050f84a17 nodeName:}" failed. No retries permitted until 2024-12-10 00:50:33.480007884 +0000 UTC m=+14.983786320 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a4485782-5ee1-493b-9f20-c4f050f84a17-config-volume") pod "coredns-6d4b75cb6d-kbpjd" (UID: "a4485782-5ee1-493b-9f20-c4f050f84a17") : failed to sync configmap cache: timed out waiting for the condition
	Dec 10 00:50:32 running-upgrade-688000 kubelet[11773]: E1210 00:50:32.979972   11773 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	Dec 10 00:50:32 running-upgrade-688000 kubelet[11773]: E1210 00:50:32.980255   11773 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/73b8c6a6-acd5-4fe8-8f36-78a3b0330deb-config-volume podName:73b8c6a6-acd5-4fe8-8f36-78a3b0330deb nodeName:}" failed. No retries permitted until 2024-12-10 00:50:33.480248212 +0000 UTC m=+14.984026649 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/73b8c6a6-acd5-4fe8-8f36-78a3b0330deb-config-volume") pod "coredns-6d4b75cb6d-b4v6c" (UID: "73b8c6a6-acd5-4fe8-8f36-78a3b0330deb") : failed to sync configmap cache: timed out waiting for the condition
	Dec 10 00:50:33 running-upgrade-688000 kubelet[11773]: I1210 00:50:33.880191   11773 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="db00e9e422e6208c5f6cd9199f967e22648a5e04450627331767e9e0732addf5"
	Dec 10 00:50:33 running-upgrade-688000 kubelet[11773]: I1210 00:50:33.881530   11773 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="5d584960bce2b238debcc3d0fa598bbe8fe10a25a0f6575092a0033123f12966"
	Dec 10 00:54:20 running-upgrade-688000 kubelet[11773]: I1210 00:54:20.364500   11773 scope.go:110] "RemoveContainer" containerID="02d8d43dfbea939714d6c5c0cd47b69d6ba2d09523e920f2f8539c132786f9fe"
	Dec 10 00:54:20 running-upgrade-688000 kubelet[11773]: I1210 00:54:20.388013   11773 scope.go:110] "RemoveContainer" containerID="64247a147667d0ea4543ed58ab673c5e01c72b4868d88de79c43ab205a191583"
	
	
	==> storage-provisioner [1f492ad3a491] <==
	I1210 00:50:32.020672       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1210 00:50:32.027677       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1210 00:50:32.027707       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1210 00:50:32.030965       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1210 00:50:32.031104       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-688000_fe4c8926-b5bd-4889-b54e-d75886018089!
	I1210 00:50:32.031521       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1a36f7e3-ce06-4b6c-8dcb-e24c4fa77c3b", APIVersion:"v1", ResourceVersion:"358", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-688000_fe4c8926-b5bd-4889-b54e-d75886018089 became leader
	I1210 00:50:32.131425       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-688000_fe4c8926-b5bd-4889-b54e-d75886018089!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-688000 -n running-upgrade-688000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-688000 -n running-upgrade-688000: exit status 2 (15.620311084s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-688000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-688000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-688000
--- FAIL: TestRunningBinaryUpgrade (605.30s)
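
The status probe and the coredns sections in the log dump above show two distinct symptoms: the apiserver reports "Stopped", and the coredns pods' HINFO probes to the QEMU user-mode resolver (10.0.2.3:53) all time out. A minimal manual re-check sketch, assuming the binary under test is still at out/minikube-darwin-arm64 and the profile has not yet been cleaned up (the delete step above removes it); none of this was run as part of the report:

	# Re-run the status probe the test helper used; exit status 2 with
	# "Stopped" matches the failure captured above.
	out/minikube-darwin-arm64 status --format='{{.APIServer}}' -p running-upgrade-688000

	# Hypothetical in-guest check of the upstream resolver the coredns
	# errors point at (10.0.2.3 is QEMU's user-mode DNS; assumes the
	# guest's busybox provides nslookup).
	out/minikube-darwin-arm64 ssh -p running-upgrade-688000 -- nslookup kubernetes.io 10.0.2.3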

TestKubernetesUpgrade (18.8s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-418000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-418000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.883351458s)

-- stdout --
	* [kubernetes-upgrade-418000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-418000" primary control-plane node in "kubernetes-upgrade-418000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-418000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
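
Both create attempts in the stdout above fail the same way: the qemu2 driver cannot reach the socket_vmnet socket at /var/run/socket_vmnet (the SocketVMnetPath in the cluster config logged below), and the stderr trace that follows shows the same connection refused from inside libmachine. A host-side triage sketch, assuming socket_vmnet was installed as a Homebrew service (the service name and paths are assumptions, not re-verified on this agent):

	# Does the socket exist on the host?
	ls -l /var/run/socket_vmnet

	# Is the daemon loaded? (launchd label assumes a Homebrew install)
	sudo launchctl list | grep -i socket_vmnet

	# If it is down, restart it and re-run the test.
	sudo brew services restart socket_vmnet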
** stderr ** 
	I1209 16:47:53.183964    5281 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:47:53.184151    5281 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:47:53.184157    5281 out.go:358] Setting ErrFile to fd 2...
	I1209 16:47:53.184160    5281 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:47:53.184331    5281 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:47:53.185497    5281 out.go:352] Setting JSON to false
	I1209 16:47:53.203668    5281 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4643,"bootTime":1733787030,"procs":533,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:47:53.203740    5281 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:47:53.208607    5281 out.go:177] * [kubernetes-upgrade-418000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:47:53.214913    5281 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:47:53.214935    5281 notify.go:220] Checking for updates...
	I1209 16:47:53.223317    5281 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:47:53.226255    5281 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:47:53.230313    5281 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:47:53.233361    5281 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:47:53.236320    5281 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:47:53.239747    5281 config.go:182] Loaded profile config "multinode-350000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:47:53.239837    5281 config.go:182] Loaded profile config "running-upgrade-688000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:47:53.239885    5281 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:47:53.244291    5281 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:47:53.251259    5281 start.go:297] selected driver: qemu2
	I1209 16:47:53.251266    5281 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:47:53.251271    5281 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:47:53.253893    5281 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 16:47:53.257342    5281 out.go:177] * Automatically selected the socket_vmnet network
	I1209 16:47:53.260372    5281 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 16:47:53.260389    5281 cni.go:84] Creating CNI manager for ""
	I1209 16:47:53.260409    5281 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1209 16:47:53.260443    5281 start.go:340] cluster config:
	{Name:kubernetes-upgrade-418000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:47:53.265224    5281 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:47:53.273237    5281 out.go:177] * Starting "kubernetes-upgrade-418000" primary control-plane node in "kubernetes-upgrade-418000" cluster
	I1209 16:47:53.277292    5281 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1209 16:47:53.277308    5281 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1209 16:47:53.277323    5281 cache.go:56] Caching tarball of preloaded images
	I1209 16:47:53.277402    5281 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:47:53.277407    5281 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1209 16:47:53.277462    5281 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/kubernetes-upgrade-418000/config.json ...
	I1209 16:47:53.277475    5281 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/kubernetes-upgrade-418000/config.json: {Name:mk5414e837f69b0f2b304326ea90bd05e0b8a806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:47:53.277875    5281 start.go:360] acquireMachinesLock for kubernetes-upgrade-418000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:47:53.277933    5281 start.go:364] duration metric: took 49.875µs to acquireMachinesLock for "kubernetes-upgrade-418000"
	I1209 16:47:53.277946    5281 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:47:53.277974    5281 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:47:53.286243    5281 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 16:47:53.311852    5281 start.go:159] libmachine.API.Create for "kubernetes-upgrade-418000" (driver="qemu2")
	I1209 16:47:53.311875    5281 client.go:168] LocalClient.Create starting
	I1209 16:47:53.311961    5281 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:47:53.312005    5281 main.go:141] libmachine: Decoding PEM data...
	I1209 16:47:53.312016    5281 main.go:141] libmachine: Parsing certificate...
	I1209 16:47:53.312050    5281 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:47:53.312079    5281 main.go:141] libmachine: Decoding PEM data...
	I1209 16:47:53.312088    5281 main.go:141] libmachine: Parsing certificate...
	I1209 16:47:53.312524    5281 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:47:53.483365    5281 main.go:141] libmachine: Creating SSH key...
	I1209 16:47:53.583279    5281 main.go:141] libmachine: Creating Disk image...
	I1209 16:47:53.583287    5281 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:47:53.583546    5281 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/disk.qcow2
	I1209 16:47:53.594388    5281 main.go:141] libmachine: STDOUT: 
	I1209 16:47:53.594412    5281 main.go:141] libmachine: STDERR: 
	I1209 16:47:53.594478    5281 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/disk.qcow2 +20000M
	I1209 16:47:53.603332    5281 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:47:53.603346    5281 main.go:141] libmachine: STDERR: 
	I1209 16:47:53.603367    5281 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/disk.qcow2
	I1209 16:47:53.603378    5281 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:47:53.603391    5281 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:47:53.603426    5281 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:73:55:96:05:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/disk.qcow2
	I1209 16:47:53.605326    5281 main.go:141] libmachine: STDOUT: 
	I1209 16:47:53.605343    5281 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:47:53.605363    5281 client.go:171] duration metric: took 293.483791ms to LocalClient.Create
	I1209 16:47:55.607558    5281 start.go:128] duration metric: took 2.329564334s to createHost
	I1209 16:47:55.607631    5281 start.go:83] releasing machines lock for "kubernetes-upgrade-418000", held for 2.329698541s
	W1209 16:47:55.607677    5281 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:47:55.622655    5281 out.go:177] * Deleting "kubernetes-upgrade-418000" in qemu2 ...
	W1209 16:47:55.649072    5281 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:47:55.649105    5281 start.go:729] Will try again in 5 seconds ...
	I1209 16:48:00.651403    5281 start.go:360] acquireMachinesLock for kubernetes-upgrade-418000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:48:00.651986    5281 start.go:364] duration metric: took 473.042µs to acquireMachinesLock for "kubernetes-upgrade-418000"
	I1209 16:48:00.652073    5281 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:48:00.652374    5281 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:48:00.659155    5281 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 16:48:00.707868    5281 start.go:159] libmachine.API.Create for "kubernetes-upgrade-418000" (driver="qemu2")
	I1209 16:48:00.707937    5281 client.go:168] LocalClient.Create starting
	I1209 16:48:00.708118    5281 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:48:00.708202    5281 main.go:141] libmachine: Decoding PEM data...
	I1209 16:48:00.708227    5281 main.go:141] libmachine: Parsing certificate...
	I1209 16:48:00.708297    5281 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:48:00.708354    5281 main.go:141] libmachine: Decoding PEM data...
	I1209 16:48:00.708376    5281 main.go:141] libmachine: Parsing certificate...
	I1209 16:48:00.709160    5281 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:48:00.877383    5281 main.go:141] libmachine: Creating SSH key...
	I1209 16:48:00.965452    5281 main.go:141] libmachine: Creating Disk image...
	I1209 16:48:00.965465    5281 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:48:00.965711    5281 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/disk.qcow2
	I1209 16:48:00.976232    5281 main.go:141] libmachine: STDOUT: 
	I1209 16:48:00.976248    5281 main.go:141] libmachine: STDERR: 
	I1209 16:48:00.976312    5281 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/disk.qcow2 +20000M
	I1209 16:48:00.985096    5281 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:48:00.985115    5281 main.go:141] libmachine: STDERR: 
	I1209 16:48:00.985140    5281 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/disk.qcow2
	I1209 16:48:00.985146    5281 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:48:00.985157    5281 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:48:00.985189    5281 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:b2:13:ad:65:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/disk.qcow2
	I1209 16:48:00.987191    5281 main.go:141] libmachine: STDOUT: 
	I1209 16:48:00.987214    5281 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:48:00.987227    5281 client.go:171] duration metric: took 279.275ms to LocalClient.Create
	I1209 16:48:02.989440    5281 start.go:128] duration metric: took 2.337032041s to createHost
	I1209 16:48:02.989544    5281 start.go:83] releasing machines lock for "kubernetes-upgrade-418000", held for 2.337543667s
	W1209 16:48:02.990029    5281 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:48:02.997667    5281 out.go:201] 
	W1209 16:48:03.007832    5281 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:48:03.007901    5281 out.go:270] * 
	* 
	W1209 16:48:03.010559    5281 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:48:03.019708    5281 out.go:201] 

** /stderr **
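Note: before the network step fails, the stderr capture above shows minikube's two-step disk provisioning: qemu-img convert turns the raw scratch file into a qcow2 image, then qemu-img resize grows it by +20000M. A minimal Go sketch of the same sequence via os/exec (illustrative only, not minikube's code; the disk file names are placeholders and qemu-img must be on PATH):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and echoes its combined output, loosely mirroring
	// the "executing: qemu-img ..." lines in the capture above.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("executing: %s %v\n%s", name, args, out)
		return err
	}

	func main() {
		// Convert raw -> qcow2, then grow the image, matching the
		// "Creating 20000 MB hard disk image..." step in the log.
		if err := run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", "disk.qcow2.raw", "disk.qcow2"); err != nil {
			panic(err)
		}
		if err := run("qemu-img", "resize", "disk.qcow2", "+20000M"); err != nil {
			panic(err)
		}
	}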
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-418000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-418000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-418000: (3.490539417s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-418000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-418000 status --format={{.Host}}: exit status 7 (72.60625ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-418000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-418000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.1852985s)

-- stdout --
	* [kubernetes-upgrade-418000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-418000" primary control-plane node in "kubernetes-upgrade-418000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-418000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-418000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 16:48:06.633531    5324 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:48:06.633691    5324 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:48:06.633695    5324 out.go:358] Setting ErrFile to fd 2...
	I1209 16:48:06.633697    5324 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:48:06.633832    5324 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:48:06.634897    5324 out.go:352] Setting JSON to false
	I1209 16:48:06.652747    5324 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4656,"bootTime":1733787030,"procs":538,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:48:06.652824    5324 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:48:06.656274    5324 out.go:177] * [kubernetes-upgrade-418000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:48:06.663252    5324 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:48:06.663287    5324 notify.go:220] Checking for updates...
	I1209 16:48:06.672186    5324 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:48:06.675200    5324 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:48:06.678217    5324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:48:06.681260    5324 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:48:06.684235    5324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:48:06.687468    5324 config.go:182] Loaded profile config "kubernetes-upgrade-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1209 16:48:06.687719    5324 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:48:06.692251    5324 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 16:48:06.699146    5324 start.go:297] selected driver: qemu2
	I1209 16:48:06.699152    5324 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:48:06.699194    5324 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:48:06.701589    5324 cni.go:84] Creating CNI manager for ""
	I1209 16:48:06.701618    5324 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:48:06.701641    5324 start.go:340] cluster config:
	{Name:kubernetes-upgrade-418000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-418000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:48:06.705818    5324 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:48:06.714070    5324 out.go:177] * Starting "kubernetes-upgrade-418000" primary control-plane node in "kubernetes-upgrade-418000" cluster
	I1209 16:48:06.718228    5324 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:48:06.718244    5324 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:48:06.718260    5324 cache.go:56] Caching tarball of preloaded images
	I1209 16:48:06.718344    5324 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:48:06.718350    5324 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:48:06.718400    5324 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/kubernetes-upgrade-418000/config.json ...
	I1209 16:48:06.718893    5324 start.go:360] acquireMachinesLock for kubernetes-upgrade-418000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:48:06.718920    5324 start.go:364] duration metric: took 21.625µs to acquireMachinesLock for "kubernetes-upgrade-418000"
	I1209 16:48:06.718929    5324 start.go:96] Skipping create...Using existing machine configuration
	I1209 16:48:06.718934    5324 fix.go:54] fixHost starting: 
	I1209 16:48:06.719045    5324 fix.go:112] recreateIfNeeded on kubernetes-upgrade-418000: state=Stopped err=<nil>
	W1209 16:48:06.719053    5324 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 16:48:06.726149    5324 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-418000" ...
	I1209 16:48:06.730197    5324 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:48:06.730242    5324 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:b2:13:ad:65:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/disk.qcow2
	I1209 16:48:06.732345    5324 main.go:141] libmachine: STDOUT: 
	I1209 16:48:06.732364    5324 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:48:06.732394    5324 fix.go:56] duration metric: took 13.460542ms for fixHost
	I1209 16:48:06.732398    5324 start.go:83] releasing machines lock for "kubernetes-upgrade-418000", held for 13.472875ms
	W1209 16:48:06.732403    5324 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:48:06.732442    5324 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:48:06.732446    5324 start.go:729] Will try again in 5 seconds ...
	I1209 16:48:11.734605    5324 start.go:360] acquireMachinesLock for kubernetes-upgrade-418000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:48:11.734835    5324 start.go:364] duration metric: took 178.584µs to acquireMachinesLock for "kubernetes-upgrade-418000"
	I1209 16:48:11.734891    5324 start.go:96] Skipping create...Using existing machine configuration
	I1209 16:48:11.734898    5324 fix.go:54] fixHost starting: 
	I1209 16:48:11.735263    5324 fix.go:112] recreateIfNeeded on kubernetes-upgrade-418000: state=Stopped err=<nil>
	W1209 16:48:11.735278    5324 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 16:48:11.743598    5324 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-418000" ...
	I1209 16:48:11.748498    5324 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:48:11.748611    5324 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:b2:13:ad:65:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubernetes-upgrade-418000/disk.qcow2
	I1209 16:48:11.753119    5324 main.go:141] libmachine: STDOUT: 
	I1209 16:48:11.753152    5324 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:48:11.753193    5324 fix.go:56] duration metric: took 18.294625ms for fixHost
	I1209 16:48:11.753200    5324 start.go:83] releasing machines lock for "kubernetes-upgrade-418000", held for 18.351834ms
	W1209 16:48:11.753292    5324 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-418000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-418000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:48:11.761413    5324 out.go:201] 
	W1209 16:48:11.764586    5324 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:48:11.764596    5324 out.go:270] * 
	* 
	W1209 16:48:11.765699    5324 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:48:11.776611    5324 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-418000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-418000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-418000 version --output=json: exit status 1 (49.375375ms)

** stderr ** 
	error: context "kubernetes-upgrade-418000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-12-09 16:48:11.836019 -0800 PST m=+3924.029969751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-418000 -n kubernetes-upgrade-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-418000 -n kubernetes-upgrade-418000: exit status 7 (35.709792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-418000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-418000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-418000
--- FAIL: TestKubernetesUpgrade (18.80s)
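Note: every qemu2 start in this test dies at the same step: socket_vmnet_client cannot reach the unix socket /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched, and the later kubectl "context does not exist" error is downstream fallout of the cluster never coming up. A minimal probe of that step (a sketch, not minikube code; it simply dials the same socket path the failing command used):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the control socket that socket_vmnet_client connects to.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// With the daemon down this prints "connection refused" (or
			// "no such file or directory"), matching the failure above.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}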

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.39s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=20062
- KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3851359675/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.39s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.02s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=20062
- KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3271695011/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.02s)
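Note: both hyperkit subtests fail for the same environmental reason: hyperkit runs only on Intel Macs, so minikube rejects the driver on darwin/arm64 with DRV_UNSUPPORTED_OS before any upgrade logic is exercised. A hypothetical sketch of such an OS/arch gate (the helper name is invented; this is not minikube's actual implementation):

	package main

	import (
		"fmt"
		"runtime"
	)

	// driverSupported is an invented helper: hyperkit is usable only on darwin/amd64.
	func driverSupported(driver string) error {
		if driver == "hyperkit" && !(runtime.GOOS == "darwin" && runtime.GOARCH == "amd64") {
			return fmt.Errorf("the driver '%s' is not supported on %s/%s", driver, runtime.GOOS, runtime.GOARCH)
		}
		return nil
	}

	func main() {
		if err := driverSupported("hyperkit"); err != nil {
			fmt.Println("X Exiting due to DRV_UNSUPPORTED_OS:", err)
		}
	}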

TestStoppedBinaryUpgrade/Upgrade (575.81s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1651117619 start -p stopped-upgrade-632000 --memory=2200 --vm-driver=qemu2 
E1209 16:48:14.059687    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
E1209 16:48:30.956473    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1651117619 start -p stopped-upgrade-632000 --memory=2200 --vm-driver=qemu2 : (40.672459792s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1651117619 -p stopped-upgrade-632000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1651117619 -p stopped-upgrade-632000 stop: (12.1116825s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-632000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E1209 16:51:39.901846    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 16:53:30.958803    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-632000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.944364542s)

-- stdout --
	* [stopped-upgrade-632000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-632000" primary control-plane node in "stopped-upgrade-632000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-632000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1209 16:49:05.860785    5393 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:49:05.860964    5393 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:49:05.860968    5393 out.go:358] Setting ErrFile to fd 2...
	I1209 16:49:05.860970    5393 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:49:05.861109    5393 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:49:05.862322    5393 out.go:352] Setting JSON to false
	I1209 16:49:05.882425    5393 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4715,"bootTime":1733787030,"procs":536,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:49:05.882501    5393 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:49:05.886882    5393 out.go:177] * [stopped-upgrade-632000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:49:05.894778    5393 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:49:05.894824    5393 notify.go:220] Checking for updates...
	I1209 16:49:05.902768    5393 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:49:05.906768    5393 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:49:05.910776    5393 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:49:05.913775    5393 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:49:05.916749    5393 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:49:05.920174    5393 config.go:182] Loaded profile config "stopped-upgrade-632000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:49:05.923763    5393 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1209 16:49:05.926819    5393 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:49:05.929748    5393 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 16:49:05.936789    5393 start.go:297] selected driver: qemu2
	I1209 16:49:05.936795    5393 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:65214 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1209 16:49:05.936841    5393 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:49:05.939597    5393 cni.go:84] Creating CNI manager for ""
	I1209 16:49:05.939629    5393 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:49:05.939661    5393 start.go:340] cluster config:
	{Name:stopped-upgrade-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:65214 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1209 16:49:05.939711    5393 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:49:05.948747    5393 out.go:177] * Starting "stopped-upgrade-632000" primary control-plane node in "stopped-upgrade-632000" cluster
	I1209 16:49:05.952743    5393 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1209 16:49:05.952761    5393 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1209 16:49:05.952773    5393 cache.go:56] Caching tarball of preloaded images
	I1209 16:49:05.952848    5393 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:49:05.952857    5393 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1209 16:49:05.952912    5393 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/config.json ...
	I1209 16:49:05.953291    5393 start.go:360] acquireMachinesLock for stopped-upgrade-632000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:49:05.953331    5393 start.go:364] duration metric: took 32.75µs to acquireMachinesLock for "stopped-upgrade-632000"
	I1209 16:49:05.953344    5393 start.go:96] Skipping create...Using existing machine configuration
	I1209 16:49:05.953349    5393 fix.go:54] fixHost starting: 
	I1209 16:49:05.953456    5393 fix.go:112] recreateIfNeeded on stopped-upgrade-632000: state=Stopped err=<nil>
	W1209 16:49:05.953464    5393 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 16:49:05.957823    5393 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-632000" ...
	I1209 16:49:05.965760    5393 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:49:05.965847    5393 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/qemu.pid -nic user,model=virtio,hostfwd=tcp::65179-:22,hostfwd=tcp::65180-:2376,hostname=stopped-upgrade-632000 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/disk.qcow2
	I1209 16:49:06.014528    5393 main.go:141] libmachine: STDOUT: 
	I1209 16:49:06.014561    5393 main.go:141] libmachine: STDERR: 
	I1209 16:49:06.014569    5393 main.go:141] libmachine: Waiting for VM to start (ssh -p 65179 docker@127.0.0.1)...
	I1209 16:49:26.870471    5393 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/config.json ...
	I1209 16:49:26.871245    5393 machine.go:93] provisionDockerMachine start ...
	I1209 16:49:26.871448    5393 main.go:141] libmachine: Using SSH client type: native
	I1209 16:49:26.871925    5393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c72fc0] 0x104c75800 <nil>  [] 0s} localhost 65179 <nil> <nil>}
	I1209 16:49:26.871940    5393 main.go:141] libmachine: About to run SSH command:
	hostname
	I1209 16:49:26.963189    5393 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1209 16:49:26.963224    5393 buildroot.go:166] provisioning hostname "stopped-upgrade-632000"
	I1209 16:49:26.963360    5393 main.go:141] libmachine: Using SSH client type: native
	I1209 16:49:26.963581    5393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c72fc0] 0x104c75800 <nil>  [] 0s} localhost 65179 <nil> <nil>}
	I1209 16:49:26.963595    5393 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-632000 && echo "stopped-upgrade-632000" | sudo tee /etc/hostname
	I1209 16:49:27.052748    5393 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-632000
	
	I1209 16:49:27.052842    5393 main.go:141] libmachine: Using SSH client type: native
	I1209 16:49:27.052988    5393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c72fc0] 0x104c75800 <nil>  [] 0s} localhost 65179 <nil> <nil>}
	I1209 16:49:27.053000    5393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-632000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-632000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-632000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1209 16:49:27.131946    5393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1209 16:49:27.131963    5393 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20062-1231/.minikube CaCertPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20062-1231/.minikube}
	I1209 16:49:27.131972    5393 buildroot.go:174] setting up certificates
	I1209 16:49:27.131977    5393 provision.go:84] configureAuth start
	I1209 16:49:27.131982    5393 provision.go:143] copyHostCerts
	I1209 16:49:27.132052    5393 exec_runner.go:144] found /Users/jenkins/minikube-integration/20062-1231/.minikube/cert.pem, removing ...
	I1209 16:49:27.132061    5393 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20062-1231/.minikube/cert.pem
	I1209 16:49:27.132176    5393 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20062-1231/.minikube/cert.pem (1123 bytes)
	I1209 16:49:27.132413    5393 exec_runner.go:144] found /Users/jenkins/minikube-integration/20062-1231/.minikube/key.pem, removing ...
	I1209 16:49:27.132417    5393 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20062-1231/.minikube/key.pem
	I1209 16:49:27.132470    5393 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20062-1231/.minikube/key.pem (1675 bytes)
	I1209 16:49:27.132613    5393 exec_runner.go:144] found /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.pem, removing ...
	I1209 16:49:27.132616    5393 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.pem
	I1209 16:49:27.132660    5393 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.pem (1082 bytes)
	I1209 16:49:27.132791    5393 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-632000 san=[127.0.0.1 localhost minikube stopped-upgrade-632000]
	I1209 16:49:27.330833    5393 provision.go:177] copyRemoteCerts
	I1209 16:49:27.330902    5393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1209 16:49:27.330911    5393 sshutil.go:53] new ssh client: &{IP:localhost Port:65179 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/id_rsa Username:docker}
	I1209 16:49:27.369834    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1209 16:49:27.376896    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1209 16:49:27.383666    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1209 16:49:27.390832    5393 provision.go:87] duration metric: took 258.847542ms to configureAuth
	I1209 16:49:27.390842    5393 buildroot.go:189] setting minikube options for container-runtime
	I1209 16:49:27.390949    5393 config.go:182] Loaded profile config "stopped-upgrade-632000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:49:27.390999    5393 main.go:141] libmachine: Using SSH client type: native
	I1209 16:49:27.391097    5393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c72fc0] 0x104c75800 <nil>  [] 0s} localhost 65179 <nil> <nil>}
	I1209 16:49:27.391102    5393 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1209 16:49:27.462336    5393 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1209 16:49:27.462344    5393 buildroot.go:70] root file system type: tmpfs
	I1209 16:49:27.462402    5393 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1209 16:49:27.462458    5393 main.go:141] libmachine: Using SSH client type: native
	I1209 16:49:27.462564    5393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c72fc0] 0x104c75800 <nil>  [] 0s} localhost 65179 <nil> <nil>}
	I1209 16:49:27.462601    5393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1209 16:49:27.537875    5393 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
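
The unit written above resets ExecStart with an empty assignment before setting the real command, exactly as its inline comments describe: systemd treats repeated ExecStart lines as an error for Type=notify services, so any inherited value must be cleared first. A minimal Go sketch of how a provisioner might render and stage such a unit with text/template (names and the trimmed template are illustrative, not minikube's actual provision.go):

    package main

    import (
    	"os"
    	"text/template"
    )

    // unitTmpl is a trimmed-down docker.service template. The empty
    // "ExecStart=" line resets any inherited value so that the following
    // ExecStart is the only one systemd sees.
    const unitTmpl = `[Service]
    Type=notify
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock {{.ExtraArgs}}
    `

    func main() {
    	t := template.Must(template.New("unit").Parse(unitTmpl))
    	// Stage to a .new file first; the caller swaps it in only when the
    	// content differs from the installed unit (see the diff step below).
    	f, err := os.Create("/tmp/docker.service.new")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	if err := t.Execute(f, struct{ ExtraArgs string }{"--label provider=qemu2"}); err != nil {
    		panic(err)
    	}
    }
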
	I1209 16:49:27.537936    5393 main.go:141] libmachine: Using SSH client type: native
	I1209 16:49:27.538044    5393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c72fc0] 0x104c75800 <nil>  [] 0s} localhost 65179 <nil> <nil>}
	I1209 16:49:27.538053    5393 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1209 16:49:27.926148    5393 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
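
The update is idempotent: `diff -u` exits 0 when the staged and installed units match, so the replace-and-restart branch after `||` runs only when the content changed, or, as in this run, when the installed unit does not exist yet. A Go sketch of the same pattern (hypothetical helper, not minikube code):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // replaceIfChanged installs newPath over curPath and restarts the unit
    // only when the contents differ -- the same effect as the shell one-liner
    // `diff -u cur new || { mv new cur; systemctl daemon-reload; ... }`.
    func replaceIfChanged(curPath, newPath, unit string) error {
    	cur, readErr := os.ReadFile(curPath)
    	staged, err := os.ReadFile(newPath)
    	if err != nil {
    		return err
    	}
    	if readErr == nil && bytes.Equal(cur, staged) {
    		return nil // nothing to do; service keeps running undisturbed
    	}
    	if err := os.Rename(newPath, curPath); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"daemon-reload"}, {"enable", unit}, {"restart", unit},
    	} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := replaceIfChanged("/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new", "docker"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
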
	I1209 16:49:27.926162    5393 machine.go:96] duration metric: took 1.05491125s to provisionDockerMachine
	I1209 16:49:27.926169    5393 start.go:293] postStartSetup for "stopped-upgrade-632000" (driver="qemu2")
	I1209 16:49:27.926176    5393 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1209 16:49:27.926249    5393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1209 16:49:27.926258    5393 sshutil.go:53] new ssh client: &{IP:localhost Port:65179 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/id_rsa Username:docker}
	I1209 16:49:27.968299    5393 ssh_runner.go:195] Run: cat /etc/os-release
	I1209 16:49:27.969564    5393 info.go:137] Remote host: Buildroot 2021.02.12
	I1209 16:49:27.969572    5393 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20062-1231/.minikube/addons for local assets ...
	I1209 16:49:27.969640    5393 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20062-1231/.minikube/files for local assets ...
	I1209 16:49:27.969733    5393 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20062-1231/.minikube/files/etc/ssl/certs/17422.pem -> 17422.pem in /etc/ssl/certs
	I1209 16:49:27.969841    5393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1209 16:49:27.972344    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/files/etc/ssl/certs/17422.pem --> /etc/ssl/certs/17422.pem (1708 bytes)
	I1209 16:49:27.978855    5393 start.go:296] duration metric: took 52.681792ms for postStartSetup
	I1209 16:49:27.978868    5393 fix.go:56] duration metric: took 22.025602791s for fixHost
	I1209 16:49:27.978903    5393 main.go:141] libmachine: Using SSH client type: native
	I1209 16:49:27.978993    5393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104c72fc0] 0x104c75800 <nil>  [] 0s} localhost 65179 <nil> <nil>}
	I1209 16:49:27.978998    5393 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1209 16:49:28.049658    5393 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733791768.267420880
	
	I1209 16:49:28.049668    5393 fix.go:216] guest clock: 1733791768.267420880
	I1209 16:49:28.049672    5393 fix.go:229] Guest: 2024-12-09 16:49:28.26742088 -0800 PST Remote: 2024-12-09 16:49:27.97887 -0800 PST m=+22.148310668 (delta=288.55088ms)
	I1209 16:49:28.049684    5393 fix.go:200] guest clock delta is within tolerance: 288.55088ms
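
The clock check above compares the host wall clock against the guest's `date +%s.%N` output and accepts the machine when the offset stays within a tolerance. A sketch of that comparison (the one-second tolerance is an assumed threshold for illustration, not necessarily minikube's value):

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // guestDelta parses the guest's `date +%s.%N` output and returns its
    // offset from the supplied host timestamp.
    func guestDelta(out string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(out, 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(host), nil
    }

    func main() {
    	const tolerance = time.Second // assumed threshold for the sketch
    	d, err := guestDelta("1733791768.267420880", time.Unix(1733791767, 978870000))
    	if err != nil {
    		panic(err)
    	}
    	if d < 0 {
    		d = -d
    	}
    	fmt.Printf("delta=%v within tolerance: %v\n", d, d <= tolerance)
    }
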
	I1209 16:49:28.049687    5393 start.go:83] releasing machines lock for "stopped-upgrade-632000", held for 22.096433333s
	I1209 16:49:28.049756    5393 ssh_runner.go:195] Run: cat /version.json
	I1209 16:49:28.049761    5393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1209 16:49:28.049765    5393 sshutil.go:53] new ssh client: &{IP:localhost Port:65179 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/id_rsa Username:docker}
	I1209 16:49:28.049779    5393 sshutil.go:53] new ssh client: &{IP:localhost Port:65179 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/id_rsa Username:docker}
	W1209 16:49:28.050283    5393 sshutil.go:64] dial failure (will retry): dial tcp [::1]:65179: connect: connection refused
	I1209 16:49:28.050306    5393 retry.go:31] will retry after 125.446271ms: dial tcp [::1]:65179: connect: connection refused
	W1209 16:49:28.216327    5393 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1209 16:49:28.216389    5393 ssh_runner.go:195] Run: systemctl --version
	I1209 16:49:28.218511    5393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1209 16:49:28.220521    5393 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1209 16:49:28.220569    5393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1209 16:49:28.223791    5393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1209 16:49:28.228921    5393 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
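
The two find/sed passes above force every bridge and podman CNI config onto the cluster pod CIDR 10.244.0.0/16 and drop IPv6 entries. The same edit can be done structurally rather than textually; a Go sketch on a minimal conflist (the sample JSON is illustrative):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Rewrites the first IPAM range of a bridge conflist to the pod CIDR,
    // mirroring what the sed expressions above do with regexes.
    func main() {
    	conf := []byte(`{"plugins":[{"type":"bridge","ipam":{"ranges":[[{"subnet":"10.88.0.0/16"}]]}}]}`)
    	var doc map[string]any
    	if err := json.Unmarshal(conf, &doc); err != nil {
    		panic(err)
    	}
    	plugins := doc["plugins"].([]any)
    	ipam := plugins[0].(map[string]any)["ipam"].(map[string]any)
    	ranges := ipam["ranges"].([]any)[0].([]any)
    	ranges[0].(map[string]any)["subnet"] = "10.244.0.0/16"
    	out, _ := json.Marshal(doc)
    	fmt.Println(string(out))
    }
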
	I1209 16:49:28.228931    5393 start.go:495] detecting cgroup driver to use...
	I1209 16:49:28.229029    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 16:49:28.237386    5393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1209 16:49:28.240475    5393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1209 16:49:28.243463    5393 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1209 16:49:28.243500    5393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1209 16:49:28.246878    5393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 16:49:28.250252    5393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1209 16:49:28.253557    5393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1209 16:49:28.256909    5393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1209 16:49:28.259677    5393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1209 16:49:28.262832    5393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1209 16:49:28.266184    5393 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1209 16:49:28.269600    5393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1209 16:49:28.272241    5393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1209 16:49:28.274994    5393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:49:28.354437    5393 ssh_runner.go:195] Run: sudo systemctl restart containerd
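
The preceding sed passes align containerd with the "cgroupfs" driver minikube detected: `SystemdCgroup` is flipped to false, the runc v1/linux runtimes are pinned to `io.containerd.runc.v2`, and the config is reloaded via daemon-reload plus a containerd restart. A Go sketch of one of those in-place regex edits, applied to an in-memory copy of config.toml:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Same transformation as
    //   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    // preserving the original indentation via the capture group.
    func main() {
    	toml := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n"
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	fmt.Print(re.ReplaceAllString(toml, "${1}SystemdCgroup = false"))
    }
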
	I1209 16:49:28.360963    5393 start.go:495] detecting cgroup driver to use...
	I1209 16:49:28.361046    5393 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1209 16:49:28.370078    5393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 16:49:28.375539    5393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1209 16:49:28.385226    5393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1209 16:49:28.389663    5393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 16:49:28.393942    5393 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1209 16:49:28.440018    5393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1209 16:49:28.445021    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1209 16:49:28.450283    5393 ssh_runner.go:195] Run: which cri-dockerd
	I1209 16:49:28.451531    5393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1209 16:49:28.454665    5393 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1209 16:49:28.459831    5393 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1209 16:49:28.538896    5393 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1209 16:49:28.620514    5393 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1209 16:49:28.620577    5393 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1209 16:49:28.626164    5393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:49:28.706990    5393 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1209 16:49:29.862034    5393 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.155027583s)
	I1209 16:49:29.862111    5393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1209 16:49:29.867106    5393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1209 16:49:29.872104    5393 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1209 16:49:29.960611    5393 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1209 16:49:30.038907    5393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:49:30.101799    5393 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1209 16:49:30.107853    5393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1209 16:49:30.112289    5393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:49:30.192375    5393 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1209 16:49:30.237371    5393 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1209 16:49:30.237474    5393 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1209 16:49:30.239974    5393 start.go:563] Will wait 60s for crictl version
	I1209 16:49:30.240024    5393 ssh_runner.go:195] Run: which crictl
	I1209 16:49:30.241434    5393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1209 16:49:30.256499    5393 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1209 16:49:30.256574    5393 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1209 16:49:30.274301    5393 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1209 16:49:30.292859    5393 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1209 16:49:30.293012    5393 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1209 16:49:30.294294    5393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
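
The /etc/hosts one-liner above is another idempotent rewrite: `grep -v` strips any existing host.minikube.internal line before the fresh mapping is appended, so repeated starts never accumulate duplicates (the same pattern recurs later for control-plane.minikube.internal). A Go sketch of the rewrite:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHost drops any existing line ending in "\t"+name and appends the
    // new mapping -- the same effect as the grep -v / echo / cp pipeline.
    func upsertHost(hosts, ip, name string) string {
    	var keep []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			keep = append(keep, line)
    		}
    	}
    	keep = append(keep, ip+"\t"+name)
    	return strings.Join(keep, "\n") + "\n"
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n10.0.2.2\thost.minikube.internal\n"
    	fmt.Print(upsertHost(hosts, "10.0.2.2", "host.minikube.internal"))
    }
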
	I1209 16:49:30.298433    5393 kubeadm.go:883] updating cluster {Name:stopped-upgrade-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:65214 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1209 16:49:30.298480    5393 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1209 16:49:30.298527    5393 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1209 16:49:30.308990    5393 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1209 16:49:30.308999    5393 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1209 16:49:30.309058    5393 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1209 16:49:30.312128    5393 ssh_runner.go:195] Run: which lz4
	I1209 16:49:30.313369    5393 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1209 16:49:30.314674    5393 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1209 16:49:30.314690    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1209 16:49:31.264667    5393 docker.go:653] duration metric: took 951.346167ms to copy over tarball
	I1209 16:49:31.264740    5393 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1209 16:49:32.450286    5393 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.185533917s)
	I1209 16:49:32.450302    5393 ssh_runner.go:146] rm: /preloaded.tar.lz4
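
Since the guest had no /preloaded.tar.lz4, the ~360 MB preload tarball was copied over SSH and unpacked straight into /var, seeding /var/lib/docker with the cached images; `--xattrs --xattrs-include security.capability` preserves file capabilities on the extracted binaries. A Go sketch shelling out to the same tar invocation (run here without sudo for brevity):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // extractPreload unpacks an lz4-compressed image tarball into destDir,
    // keeping security.capability xattrs, then removes the tarball --
    // mirroring the tar/rm pair in the log above.
    func extractPreload(tarball, destDir string) error {
    	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", destDir, "-xf", tarball)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		return fmt.Errorf("tar: %v: %s", err, out)
    	}
    	return os.Remove(tarball)
    }

    func main() {
    	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
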
	I1209 16:49:32.466173    5393 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1209 16:49:32.469512    5393 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1209 16:49:32.474719    5393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:49:32.554131    5393 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1209 16:49:33.765212    5393 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.211066458s)
	I1209 16:49:33.765327    5393 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1209 16:49:33.776232    5393 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1209 16:49:33.776241    5393 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1209 16:49:33.776248    5393 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1209 16:49:33.782858    5393 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:49:33.785024    5393 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 16:49:33.786617    5393 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 16:49:33.786631    5393 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:49:33.788599    5393 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 16:49:33.788617    5393 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 16:49:33.790266    5393 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 16:49:33.790268    5393 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 16:49:33.790639    5393 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 16:49:33.791614    5393 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 16:49:33.792762    5393 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 16:49:33.792787    5393 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1209 16:49:33.792819    5393 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 16:49:33.793690    5393 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1209 16:49:33.795396    5393 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1209 16:49:33.795451    5393 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1209 16:49:34.340619    5393 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1209 16:49:34.352486    5393 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1209 16:49:34.352525    5393 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1209 16:49:34.352596    5393 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1209 16:49:34.363400    5393 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1209 16:49:34.363475    5393 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1209 16:49:34.376679    5393 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1209 16:49:34.376716    5393 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1209 16:49:34.376770    5393 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1209 16:49:34.387383    5393 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1209 16:49:34.402871    5393 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	W1209 16:49:34.413747    5393 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1209 16:49:34.413925    5393 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1209 16:49:34.414089    5393 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1209 16:49:34.414112    5393 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 16:49:34.414143    5393 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1209 16:49:34.424903    5393 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1209 16:49:34.424929    5393 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1209 16:49:34.425013    5393 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1209 16:49:34.425196    5393 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1209 16:49:34.435529    5393 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1209 16:49:34.435656    5393 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1209 16:49:34.437301    5393 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1209 16:49:34.437317    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1209 16:49:34.481287    5393 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1209 16:49:34.481304    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1209 16:49:34.519379    5393 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1209 16:49:34.536944    5393 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1209 16:49:34.547009    5393 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1209 16:49:34.547036    5393 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1209 16:49:34.547104    5393 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1209 16:49:34.557486    5393 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1209 16:49:34.574796    5393 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1209 16:49:34.585201    5393 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1209 16:49:34.585224    5393 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1209 16:49:34.585283    5393 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1209 16:49:34.595219    5393 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1209 16:49:34.595369    5393 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1209 16:49:34.596842    5393 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1209 16:49:34.596852    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1209 16:49:34.661522    5393 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1209 16:49:34.688018    5393 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1209 16:49:34.688046    5393 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1209 16:49:34.688110    5393 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1209 16:49:34.725746    5393 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1209 16:49:34.725889    5393 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1209 16:49:34.738836    5393 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1209 16:49:34.738874    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	W1209 16:49:34.747747    5393 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1209 16:49:34.747873    5393 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:49:34.774400    5393 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1209 16:49:34.774414    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1209 16:49:34.778347    5393 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1209 16:49:34.778372    5393 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:49:34.778438    5393 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:49:34.840502    5393 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1209 16:49:34.840554    5393 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1209 16:49:34.840699    5393 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1209 16:49:34.852794    5393 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1209 16:49:34.852810    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1209 16:49:34.858557    5393 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1209 16:49:34.858570    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1209 16:49:35.015291    5393 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1209 16:49:35.015321    5393 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1209 16:49:35.015330    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1209 16:49:35.253364    5393 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1209 16:49:35.253404    5393 cache_images.go:92] duration metric: took 1.477154125s to LoadCachedImages
	W1209 16:49:35.253447    5393 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
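
The LoadCachedImages block above repeats one cycle per image: inspect the tag in the runtime, compare the image ID against the expected content hash, `docker rmi` a mismatched tag (the preload carried k8s.gcr.io tags while v1.24.1 expects registry.k8s.io names), then stream the cached tarball in with `docker load`. The kube-proxy, apiserver, controller-manager and scheduler tarballs are absent from the host cache, which yields the warning. A sketch of one cycle (paths and the truncated ID are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // ensureImage reloads image from tarball when the runtime's ID for the
    // tag doesn't match wantID.
    func ensureImage(image, wantID, tarball string) error {
    	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
    	if err == nil && strings.TrimSpace(string(out)) == wantID {
    		return nil // already present with the right content
    	}
    	exec.Command("docker", "rmi", image).Run() // drop a stale tag; errors ignored
    	load := exec.Command("docker", "load", "-i", tarball)
    	if out, err := load.CombinedOutput(); err != nil {
    		return fmt.Errorf("docker load: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	err := ensureImage("registry.k8s.io/pause:3.7",
    		"sha256:e5a475a038057...", // truncated for the example
    		"/var/lib/minikube/images/pause_3.7")
    	fmt.Println(err)
    }
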
	I1209 16:49:35.253453    5393 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1209 16:49:35.253516    5393 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-632000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1209 16:49:35.253586    5393 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1209 16:49:35.266557    5393 cni.go:84] Creating CNI manager for ""
	I1209 16:49:35.266575    5393 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:49:35.266584    5393 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1209 16:49:35.266596    5393 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-632000 NodeName:stopped-upgrade-632000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1209 16:49:35.266674    5393 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-632000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1209 16:49:35.266744    5393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1209 16:49:35.270181    5393 binaries.go:44] Found k8s binaries, skipping transfer
	I1209 16:49:35.270230    5393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1209 16:49:35.273167    5393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1209 16:49:35.278287    5393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1209 16:49:35.283520    5393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1209 16:49:35.289254    5393 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1209 16:49:35.290634    5393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1209 16:49:35.294480    5393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:49:35.369763    5393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 16:49:35.376844    5393 certs.go:68] Setting up /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000 for IP: 10.0.2.15
	I1209 16:49:35.376853    5393 certs.go:194] generating shared ca certs ...
	I1209 16:49:35.376861    5393 certs.go:226] acquiring lock for ca certs: {Name:mk94909c12771095ef5e42af3f5ec988b0b9c452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:49:35.377039    5393 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.key
	I1209 16:49:35.377797    5393 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/proxy-client-ca.key
	I1209 16:49:35.377807    5393 certs.go:256] generating profile certs ...
	I1209 16:49:35.378158    5393 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/client.key
	I1209 16:49:35.378180    5393 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.key.e690b03c
	I1209 16:49:35.378190    5393 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.crt.e690b03c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1209 16:49:35.516829    5393 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.crt.e690b03c ...
	I1209 16:49:35.516846    5393 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.crt.e690b03c: {Name:mk3830187f4b2ffcd1438f36ba321e42de5b5fd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:49:35.517463    5393 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.key.e690b03c ...
	I1209 16:49:35.517469    5393 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.key.e690b03c: {Name:mk27d3b39e4c4496cced2852ebed17b4619826bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:49:35.517659    5393 certs.go:381] copying /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.crt.e690b03c -> /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.crt
	I1209 16:49:35.517800    5393 certs.go:385] copying /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.key.e690b03c -> /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.key
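
The apiserver cert is generated under a suffixed name (apiserver.crt.e690b03c) and then copied to its canonical path. The suffix is derived from the certificate's SAN list, so a change in the advertised IPs produces a different filename and forces regeneration rather than silently reusing a cert that lacks the new IP. A sketch of deriving such a suffix (the exact hash function is an assumption, not necessarily minikube's):

    package main

    import (
    	"crypto/sha1"
    	"fmt"
    	"strings"
    )

    // sanSuffix derives a short, stable suffix from the SAN set so that a
    // changed IP list maps to a different cert filename.
    func sanSuffix(sans []string) string {
    	sum := sha1.Sum([]byte(strings.Join(sans, ",")))
    	return fmt.Sprintf("%x", sum[:4])
    }

    func main() {
    	sans := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1", "10.0.2.15"}
    	fmt.Println("apiserver.crt." + sanSuffix(sans))
    }
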
	I1209 16:49:35.518151    5393 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/proxy-client.key
	I1209 16:49:35.518336    5393 certs.go:484] found cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/1742.pem (1338 bytes)
	W1209 16:49:35.518570    5393 certs.go:480] ignoring /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/1742_empty.pem, impossibly tiny 0 bytes
	I1209 16:49:35.518577    5393 certs.go:484] found cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca-key.pem (1679 bytes)
	I1209 16:49:35.518604    5393 certs.go:484] found cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem (1082 bytes)
	I1209 16:49:35.518623    5393 certs.go:484] found cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem (1123 bytes)
	I1209 16:49:35.518654    5393 certs.go:484] found cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/key.pem (1675 bytes)
	I1209 16:49:35.518692    5393 certs.go:484] found cert: /Users/jenkins/minikube-integration/20062-1231/.minikube/files/etc/ssl/certs/17422.pem (1708 bytes)
	I1209 16:49:35.519061    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1209 16:49:35.526156    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1209 16:49:35.533635    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1209 16:49:35.541103    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1209 16:49:35.547493    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1209 16:49:35.554236    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1209 16:49:35.561060    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1209 16:49:35.568360    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1209 16:49:35.574837    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1209 16:49:35.581490    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/1742.pem --> /usr/share/ca-certificates/1742.pem (1338 bytes)
	I1209 16:49:35.588181    5393 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20062-1231/.minikube/files/etc/ssl/certs/17422.pem --> /usr/share/ca-certificates/17422.pem (1708 bytes)
	I1209 16:49:35.594845    5393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1209 16:49:35.600065    5393 ssh_runner.go:195] Run: openssl version
	I1209 16:49:35.601929    5393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1209 16:49:35.604918    5393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1209 16:49:35.606336    5393 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  9 23:43 /usr/share/ca-certificates/minikubeCA.pem
	I1209 16:49:35.606367    5393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1209 16:49:35.608134    5393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1209 16:49:35.611117    5393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1742.pem && ln -fs /usr/share/ca-certificates/1742.pem /etc/ssl/certs/1742.pem"
	I1209 16:49:35.614344    5393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1742.pem
	I1209 16:49:35.615658    5393 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  9 23:51 /usr/share/ca-certificates/1742.pem
	I1209 16:49:35.615686    5393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1742.pem
	I1209 16:49:35.617470    5393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1742.pem /etc/ssl/certs/51391683.0"
	I1209 16:49:35.620276    5393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17422.pem && ln -fs /usr/share/ca-certificates/17422.pem /etc/ssl/certs/17422.pem"
	I1209 16:49:35.623159    5393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17422.pem
	I1209 16:49:35.624517    5393 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  9 23:51 /usr/share/ca-certificates/17422.pem
	I1209 16:49:35.624541    5393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17422.pem
	I1209 16:49:35.626208    5393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17422.pem /etc/ssl/certs/3ec20f2e.0"
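
The `ln -fs ... /etc/ssl/certs/<hash>.0` steps follow OpenSSL's c_rehash convention: TLS libraries locate a CA in a certificate directory by the subject-name hash that `openssl x509 -hash` prints, so each PEM gets a `<hash>.0` symlink. A sketch shelling out for the hash and creating the link (certsDir would need root in practice):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkBySubjectHash creates certsDir/<hash>.0 -> pem, the lookup scheme
    // OpenSSL uses for CA directories.
    func linkBySubjectHash(pem, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
    	os.Remove(link) // force-replace, like ln -fs
    	return os.Symlink(pem, link)
    }

    func main() {
    	fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }
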
	I1209 16:49:35.629275    5393 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1209 16:49:35.630767    5393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1209 16:49:35.633205    5393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1209 16:49:35.635156    5393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1209 16:49:35.637091    5393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1209 16:49:35.638789    5393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1209 16:49:35.640750    5393 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
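
Each `-checkend 86400` call above exits non-zero if the certificate expires within 24 hours, which is how the restart path decides whether control-plane certs need renewal before being reused. The same check in Go with crypto/x509:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM cert at path expires within d,
    // matching `openssl x509 -noout -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }
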
	I1209 16:49:35.642535    5393 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-632000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:65214 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1209 16:49:35.642615    5393 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1209 16:49:35.652246    5393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1209 16:49:35.655435    5393 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1209 16:49:35.655442    5393 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1209 16:49:35.655474    5393 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1209 16:49:35.658811    5393 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1209 16:49:35.659110    5393 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-632000" does not appear in /Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:49:35.659213    5393 kubeconfig.go:62] /Users/jenkins/minikube-integration/20062-1231/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-632000" cluster setting kubeconfig missing "stopped-upgrade-632000" context setting]
	I1209 16:49:35.659407    5393 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/kubeconfig: {Name:mk5092322010dd3bee2f23e3f2812067ca57270f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:49:35.659821    5393 kapi.go:59] client config for stopped-upgrade-632000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/client.key", CAFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1066cf740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1209 16:49:35.660311    5393 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1209 16:49:35.662912    5393 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-632000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
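
Drift detection reuses the diff-exit-code trick from the docker.service update: a non-zero `diff -u` against kubeadm.yaml.new signals that the generated config changed (here the CRI socket gained its unix:// scheme and the cgroup driver flipped from systemd to cgroupfs), so the cluster is reconfigured instead of restarted as-is. A sketch that also captures the diff text for logging:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // configDrift returns the unified diff between the installed and freshly
    // generated kubeadm configs, or "" when they match (diff exits 0).
    func configDrift(current, generated string) (string, error) {
    	out, err := exec.Command("diff", "-u", current, generated).Output()
    	if err == nil {
    		return "", nil
    	}
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		return string(out), nil // exit 1 just means "files differ"
    	}
    	return "", err // exit 2: trouble (missing file, etc.)
    }

    func main() {
    	d, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println(d, err)
    }
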
	I1209 16:49:35.662920    5393 kubeadm.go:1160] stopping kube-system containers ...
	I1209 16:49:35.662964    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1209 16:49:35.673854    5393 docker.go:483] Stopping containers: [040bb0a9f533 5c9cfb9c3cc2 54ad1b7454b7 6dca5a28bb4e 6b7e5d2fd21a ee48870f525d 61a24778c716 260d1e1d7b2e]
	I1209 16:49:35.673923    5393 ssh_runner.go:195] Run: docker stop 040bb0a9f533 5c9cfb9c3cc2 54ad1b7454b7 6dca5a28bb4e 6b7e5d2fd21a ee48870f525d 61a24778c716 260d1e1d7b2e
	I1209 16:49:35.684847    5393 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1209 16:49:35.690497    5393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 16:49:35.693691    5393 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 16:49:35.693696    5393 kubeadm.go:157] found existing configuration files:
	
	I1209 16:49:35.693732    5393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/admin.conf
	I1209 16:49:35.696276    5393 kubeadm.go:163] "https://control-plane.minikube.internal:65214" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 16:49:35.696301    5393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 16:49:35.698862    5393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/kubelet.conf
	I1209 16:49:35.701833    5393 kubeadm.go:163] "https://control-plane.minikube.internal:65214" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 16:49:35.701885    5393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 16:49:35.704699    5393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/controller-manager.conf
	I1209 16:49:35.707287    5393 kubeadm.go:163] "https://control-plane.minikube.internal:65214" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 16:49:35.707318    5393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 16:49:35.710384    5393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/scheduler.conf
	I1209 16:49:35.713387    5393 kubeadm.go:163] "https://control-plane.minikube.internal:65214" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 16:49:35.713424    5393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
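
The four grep/rm pairs above implement stale-kubeconfig cleanup: each kubeconfig must reference the expected control-plane endpoint (here https://control-plane.minikube.internal:65214), and if grep cannot find it — exit status 1 for no match, 2 when the file is missing, as in this run — the file is removed so the kubeconfig phase below regenerates it. Roughly, as an illustrative helper (not the real kubeadm.go):

    package main

    import "os/exec"

    // cleanStaleKubeconfigs deletes any kubeconfig that does not
    // mention the expected control-plane endpoint; kubeadm rewrites it.
    func cleanStaleKubeconfigs(endpoint string) {
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		// grep exits non-zero both when the pattern is absent (1)
    		// and when the file does not exist (2); either way, stale.
    		if exec.Command("sudo", "grep", endpoint, f).Run() != nil {
    			exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }

    func main() {
    	cleanStaleKubeconfigs("https://control-plane.minikube.internal:65214")
    }
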
	I1209 16:49:35.715980    5393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 16:49:35.718859    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 16:49:35.741598    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 16:49:36.278882    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1209 16:49:36.411200    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1209 16:49:36.440930    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
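
With the new config copied into place, the cluster is rebuilt phase by phase rather than with a single `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane, and local etcd, each invoked through bash with PATH pointed at the version-matched binaries under /var/lib/minikube/binaries/v1.24.1. A sketch of that sequence (hypothetical wrapper, mirroring the /bin/bash -c pattern above):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runInitPhases replays the individual kubeadm init phases in the
    // order seen above, using the version-matched kubeadm binary.
    func runInitPhases(version, config string) error {
    	for _, phase := range []string{
    		"certs all", "kubeconfig all", "kubelet-start",
    		"control-plane all", "etcd local",
    	} {
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config %s`,
    			version, phase, config)
    		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
    			return fmt.Errorf("phase %q: %v\n%s", phase, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := runInitPhases("v1.24.1", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
    		panic(err)
    	}
    }
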
	I1209 16:49:36.460961    5393 api_server.go:52] waiting for apiserver process to appear ...
	I1209 16:49:36.461069    5393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 16:49:36.963334    5393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 16:49:37.463136    5393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 16:49:37.467465    5393 api_server.go:72] duration metric: took 1.006509916s to wait for apiserver process to appear ...
	I1209 16:49:37.467475    5393 api_server.go:88] waiting for apiserver healthz status ...
	I1209 16:49:37.467494    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:42.469706    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
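
From here the run is in a health-check loop: each probe of https://10.0.2.15:8443/healthz times out after five seconds (the gap between every "Checking" line and its "stopped" line) and is retried until the overall wait expires; in this stopped-upgrade VM the apiserver never answers, so every cycle below repeats the same pattern. The shape of such a poller, as a sketch with assumed timeouts (the bootstrap apiserver serves a self-signed certificate, so verification is skipped here; minikube's real client is configured differently):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls /healthz until it returns 200 or the
    // deadline passes; each probe has its own short timeout so a hung
    // apiserver cannot stall the loop.
    func waitForHealthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	end := time.Now().Add(deadline)
    	for time.Now().Before(end) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
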
	I1209 16:49:42.469809    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:47.470580    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:47.470606    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:52.471079    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:52.471139    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:49:57.471796    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:49:57.471878    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:02.472913    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:02.472956    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:07.474468    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:07.474510    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:12.476018    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:12.476039    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:17.477966    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:17.478010    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:22.480408    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:22.480492    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:27.483105    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:27.483125    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:32.485289    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:32.485312    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:37.487492    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:37.487762    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:50:37.510789    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:50:37.510877    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:50:37.523307    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:50:37.523383    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:50:37.534495    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:50:37.534567    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:50:37.546623    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:50:37.546696    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:50:37.561690    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:50:37.561775    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:50:37.572762    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:50:37.572836    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:50:37.583615    5393 logs.go:282] 0 containers: []
	W1209 16:50:37.583625    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:50:37.583700    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:50:37.594378    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:50:37.594397    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:50:37.594403    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:50:37.606945    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:50:37.606956    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:50:37.625779    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:50:37.625790    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:50:37.630108    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:50:37.630116    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:50:37.649135    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:50:37.649150    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:50:37.690958    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:50:37.690969    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:50:37.702648    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:50:37.702662    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:50:37.714601    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:50:37.714612    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:50:37.729799    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:50:37.729810    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:50:37.745883    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:50:37.745897    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:50:37.758274    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:50:37.758287    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:50:37.798257    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:50:37.798269    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:50:37.812660    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:50:37.812673    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:50:37.827132    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:50:37.827144    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:50:37.945101    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:50:37.945112    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:50:37.960403    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:50:37.960414    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:50:37.972180    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:50:37.972193    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
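
Each failed health window triggers one diagnostic sweep, shown above and repeated in the cycles that follow: find every container per component via the k8s_<component> name filter, tail the last 400 lines of each (two IDs per component where a previous, exited instance still exists), then pull host-side context from journalctl (kubelet, docker, cri-docker), a filtered dmesg, `kubectl describe nodes` against /var/lib/minikube/kubeconfig, and a container status listing that falls back from crictl to docker. The container half of the sweep, sketched with a hypothetical helper:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // gatherComponentLogs tails the last 400 log lines of every
    // container belonging to each control-plane component, keyed by
    // component name and container ID.
    func gatherComponentLogs() map[string]string {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    		"storage-provisioner",
    	}
    	logs := map[string]string{}
    	for _, c := range components {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c,
    			"--format", "{{.ID}}").Output()
    		if err != nil {
    			continue
    		}
    		for _, id := range strings.Fields(string(out)) {
    			// docker logs writes to both streams; capture both.
    			tail, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			logs[c+" ["+id+"]"] = string(tail)
    		}
    	}
    	return logs
    }

    func main() {
    	for k, v := range gatherComponentLogs() {
    		fmt.Printf("== %s: %d bytes\n", k, len(v))
    	}
    }
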
	I1209 16:50:40.500776    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:45.503225    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:45.503500    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:50:45.535458    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:50:45.535590    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:50:45.550933    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:50:45.551026    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:50:45.564058    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:50:45.564137    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:50:45.575598    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:50:45.575696    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:50:45.585627    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:50:45.585694    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:50:45.596109    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:50:45.596182    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:50:45.606563    5393 logs.go:282] 0 containers: []
	W1209 16:50:45.606575    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:50:45.606646    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:50:45.617191    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:50:45.617209    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:50:45.617216    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:50:45.621623    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:50:45.621630    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:50:45.636916    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:50:45.636927    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:50:45.660975    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:50:45.660985    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:50:45.675192    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:50:45.675203    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:50:45.687524    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:50:45.687537    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:50:45.701941    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:50:45.701952    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:50:45.713023    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:50:45.713034    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:50:45.749486    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:50:45.749496    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:50:45.788532    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:50:45.788547    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:50:45.803076    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:50:45.803088    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:50:45.814638    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:50:45.814649    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:50:45.825758    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:50:45.825772    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:50:45.864987    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:50:45.865001    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:50:45.875927    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:50:45.875938    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:50:45.890802    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:50:45.890813    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:50:45.908568    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:50:45.908581    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:50:48.422869    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:50:53.425277    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:50:53.425486    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:50:53.442803    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:50:53.442894    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:50:53.455188    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:50:53.455267    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:50:53.466082    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:50:53.466155    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:50:53.476787    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:50:53.476867    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:50:53.486933    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:50:53.487016    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:50:53.497743    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:50:53.497813    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:50:53.508447    5393 logs.go:282] 0 containers: []
	W1209 16:50:53.508463    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:50:53.508530    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:50:53.518980    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:50:53.518998    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:50:53.519004    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:50:53.533676    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:50:53.533691    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:50:53.548782    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:50:53.548794    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:50:53.563721    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:50:53.563733    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:50:53.568137    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:50:53.568146    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:50:53.605877    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:50:53.605889    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:50:53.619162    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:50:53.619173    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:50:53.631924    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:50:53.631937    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:50:53.645782    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:50:53.645794    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:50:53.660116    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:50:53.660127    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:50:53.685621    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:50:53.685631    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:50:53.723953    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:50:53.723964    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:50:53.737670    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:50:53.737680    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:50:53.752397    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:50:53.752407    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:50:53.763345    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:50:53.763370    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:50:53.775276    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:50:53.775287    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:50:53.792840    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:50:53.792851    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:50:56.331579    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:01.333924    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:01.334079    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:01.348379    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:51:01.348460    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:01.358983    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:51:01.359049    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:01.373597    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:51:01.373678    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:01.385941    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:51:01.386017    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:01.397631    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:51:01.397722    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:01.408671    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:51:01.408749    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:01.418865    5393 logs.go:282] 0 containers: []
	W1209 16:51:01.418877    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:01.418932    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:01.429328    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:51:01.429349    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:51:01.429356    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:51:01.443505    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:51:01.443516    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:51:01.460438    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:51:01.460448    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:51:01.472085    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:51:01.472098    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:51:01.483820    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:51:01.483832    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:01.496208    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:01.496220    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:01.500483    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:01.500491    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:01.535694    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:51:01.535704    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:51:01.549732    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:51:01.549742    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:51:01.561576    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:51:01.561586    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:51:01.574029    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:51:01.574040    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:51:01.588320    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:01.588334    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:51:01.625807    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:51:01.625823    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:51:01.640681    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:51:01.640697    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:51:01.652516    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:51:01.652528    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:51:01.667697    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:51:01.667707    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:51:01.705829    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:01.705841    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:04.233639    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:09.236010    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:09.236183    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:09.251977    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:51:09.252074    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:09.265717    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:51:09.265799    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:09.276374    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:51:09.276439    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:09.286723    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:51:09.286815    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:09.296687    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:51:09.296765    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:09.307172    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:51:09.307250    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:09.316964    5393 logs.go:282] 0 containers: []
	W1209 16:51:09.316979    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:09.317045    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:09.327252    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:51:09.327269    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:09.327275    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:51:09.364269    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:51:09.364278    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:51:09.375725    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:09.375736    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:09.381414    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:51:09.381423    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:51:09.393140    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:51:09.393150    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:09.405639    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:51:09.405650    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:51:09.419646    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:51:09.419656    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:51:09.433605    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:51:09.433614    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:51:09.445549    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:51:09.445563    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:51:09.458136    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:51:09.458150    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:51:09.473195    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:09.473204    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:09.508905    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:51:09.508916    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:51:09.547628    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:51:09.547644    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:51:09.562834    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:51:09.562844    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:51:09.582398    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:51:09.582409    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:51:09.600048    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:51:09.600059    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:51:09.611545    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:09.611560    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:12.137767    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:17.140069    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:17.140276    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:17.156332    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:51:17.156428    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:17.168705    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:51:17.168790    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:17.179547    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:51:17.179620    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:17.190058    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:51:17.190139    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:17.200671    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:51:17.200748    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:17.211384    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:51:17.211456    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:17.221244    5393 logs.go:282] 0 containers: []
	W1209 16:51:17.221260    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:17.221326    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:17.231776    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:51:17.231792    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:51:17.231798    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:51:17.246974    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:51:17.246984    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:51:17.261604    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:17.261616    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:17.286239    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:51:17.286246    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:51:17.323777    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:51:17.323790    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:51:17.335467    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:51:17.335477    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:51:17.346792    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:51:17.346804    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:17.358664    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:17.358675    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:17.362713    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:51:17.362720    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:51:17.378105    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:51:17.378146    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:51:17.392090    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:51:17.392101    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:51:17.410077    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:51:17.410088    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:51:17.421911    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:17.421921    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:51:17.458944    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:17.458956    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:17.494784    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:51:17.494799    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:51:17.515335    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:51:17.515346    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:51:17.527078    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:51:17.527089    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:51:20.039995    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:25.042321    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:25.042543    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:25.060349    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:51:25.060453    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:25.073380    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:51:25.073491    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:25.084793    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:51:25.084873    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:25.094831    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:51:25.094914    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:25.105323    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:51:25.105405    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:25.115964    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:51:25.116035    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:25.126016    5393 logs.go:282] 0 containers: []
	W1209 16:51:25.126026    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:25.126083    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:25.136313    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:51:25.136339    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:51:25.136344    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:51:25.148332    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:51:25.148345    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:51:25.163341    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:51:25.163356    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:25.179581    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:25.179596    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:25.183622    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:25.183631    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:25.220292    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:51:25.220303    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:51:25.234492    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:51:25.234507    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:51:25.249082    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:25.249092    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:25.272830    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:51:25.272838    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:51:25.315156    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:51:25.315167    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:51:25.326799    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:51:25.326814    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:51:25.344183    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:51:25.344194    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:51:25.358523    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:51:25.358535    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:51:25.369260    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:51:25.369272    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:51:25.381564    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:25.381574    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:51:25.419615    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:51:25.419636    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:51:25.433902    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:51:25.433913    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:51:27.947602    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:32.950331    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:32.950436    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:32.961727    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:51:32.961805    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:32.972406    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:51:32.972487    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:32.982807    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:51:32.982891    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:32.993457    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:51:32.993532    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:33.004576    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:51:33.004654    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:33.015846    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:51:33.015923    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:33.026658    5393 logs.go:282] 0 containers: []
	W1209 16:51:33.026667    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:33.026729    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:33.036943    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:51:33.036970    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:33.036976    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:51:33.074002    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:33.074012    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:33.077984    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:51:33.077991    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:51:33.116813    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:51:33.116830    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:51:33.131958    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:51:33.131968    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:51:33.143929    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:51:33.143939    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:33.155504    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:33.155516    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:33.190651    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:51:33.190663    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:51:33.204973    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:51:33.204984    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:51:33.219309    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:51:33.219320    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:51:33.231368    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:51:33.231379    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:51:33.248239    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:51:33.248251    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:51:33.271008    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:51:33.271021    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:51:33.282684    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:33.282695    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:33.305902    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:51:33.305911    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:51:33.317085    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:51:33.317097    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:51:33.328799    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:51:33.328810    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:51:35.844955    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:40.847368    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:40.847617    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:40.868911    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:51:40.869012    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:40.882309    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:51:40.882397    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:40.898844    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:51:40.898917    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:40.909843    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:51:40.909922    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:40.920212    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:51:40.920285    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:40.930361    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:51:40.930433    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:40.941195    5393 logs.go:282] 0 containers: []
	W1209 16:51:40.941206    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:40.941266    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:40.951840    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:51:40.951859    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:51:40.951865    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:51:40.966167    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:51:40.966180    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:51:40.977639    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:51:40.977651    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:51:40.990543    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:40.990554    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:40.994514    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:51:40.994523    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:51:41.009479    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:51:41.009490    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:51:41.027000    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:41.027013    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:41.063364    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:51:41.063375    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:51:41.077100    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:51:41.077110    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:51:41.122999    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:51:41.123011    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:51:41.134785    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:51:41.134797    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:51:41.151744    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:41.151754    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:41.176616    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:41.176624    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:51:41.214977    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:51:41.214985    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:51:41.228951    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:51:41.228962    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:51:41.240598    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:51:41.240610    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:51:41.255685    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:51:41.255696    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:43.769864    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:48.772255    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:48.772432    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:48.788207    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:51:48.788284    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:48.799389    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:51:48.799476    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:48.810237    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:51:48.810315    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:48.821159    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:51:48.821240    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:48.835855    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:51:48.835934    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:48.846130    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:51:48.846201    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:48.856539    5393 logs.go:282] 0 containers: []
	W1209 16:51:48.856551    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:48.856616    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:48.867270    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
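Before each gathering pass, the runner re-enumerates the control-plane containers with one docker ps -a --filter=name=k8s_<component> --format={{.ID}} call per component; those calls produce the "2 containers: [...]" and "0 containers: []" lines, and the empty kindnet result triggers the warning because this cluster runs without the kindnet CNI. A hedged sketch of that enumeration (the component list is copied from the filters above; it assumes a docker CLI on PATH):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Enumerate container IDs per component via docker name filters,
    // as in the logged commands. Illustrative only.
    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "storage-provisioner",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c,
                "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("%s: docker ps failed: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }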
	I1209 16:51:48.867286    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:48.867292    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:48.872498    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:48.872508    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:48.909960    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:51:48.909971    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:51:48.921818    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:51:48.921830    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:51:48.936284    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:48.936298    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:51:48.973709    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:51:48.973717    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:51:48.987644    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:51:48.987653    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:51:49.007720    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:51:49.007734    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:51:49.024939    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:51:49.024951    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:51:49.036577    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:51:49.036587    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:51:49.047641    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:49.047652    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:49.070549    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:51:49.070556    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:51:49.087815    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:51:49.087825    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:51:49.125055    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:51:49.125067    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:51:49.136089    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:51:49.136101    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:51:49.147605    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:51:49.147616    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:51:49.166263    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:51:49.166273    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:51.683368    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:51:56.685623    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:51:56.685732    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:51:56.697093    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:51:56.697177    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:51:56.713829    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:51:56.713897    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:51:56.724781    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:51:56.724864    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:51:56.736548    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:51:56.736622    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:51:56.747361    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:51:56.747430    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:51:56.759424    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:51:56.759489    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:51:56.769437    5393 logs.go:282] 0 containers: []
	W1209 16:51:56.769450    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:51:56.769516    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:51:56.780402    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:51:56.780420    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:51:56.780426    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:51:56.791944    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:51:56.791954    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:51:56.830221    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:51:56.830229    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:51:56.866336    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:51:56.866348    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:51:56.877868    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:51:56.877880    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:51:56.895975    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:51:56.895987    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:51:56.911872    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:51:56.911883    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:51:56.926169    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:51:56.926181    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:51:56.938116    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:51:56.938127    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:51:56.950641    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:51:56.950654    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:51:56.973633    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:51:56.973643    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:51:56.978260    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:51:56.978267    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:51:57.002589    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:51:57.002601    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:51:57.039546    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:51:57.039562    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:51:57.055009    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:51:57.055035    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:51:57.069259    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:51:57.069276    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:51:57.082257    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:51:57.082271    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
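Note that every source in a gathering pass is read with a fixed cap: docker logs --tail 400 per container, journalctl ... -n 400 for the kubelet and Docker/cri-docker units, and a severity-filtered dmesg piped through tail -n 400, so each pass stays the same size however long the retry loop runs. A hypothetical sketch of that bounded collection (the container IDs and the 400-line cap are taken from the commands above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Bounded log snapshots, mirroring the logged commands; the
    // --tail/-n 400 caps keep each pass a fixed size. Sketch only;
    // assumes docker and journalctl are available on the host.
    func main() {
        // IDs as reported by the enumeration above
        for _, id := range []string{"e7e93ee22e8e", "0e7115dae54e", "ea5d9f0beb08"} {
            out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            fmt.Printf("== container %s: %d bytes ==\n", id, len(out))
        }
        for _, unit := range []string{"kubelet", "docker", "cri-docker"} {
            out, _ := exec.Command("journalctl", "-u", unit, "-n", "400").CombinedOutput()
            fmt.Printf("== unit %s: %d bytes ==\n", unit, len(out))
        }
    }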
	I1209 16:51:59.595774    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:04.598439    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:04.598619    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:04.614749    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:52:04.614837    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:04.626099    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:52:04.626170    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:04.636553    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:52:04.636626    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:04.655106    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:52:04.655181    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:04.665465    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:52:04.665543    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:04.676276    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:52:04.676345    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:04.686545    5393 logs.go:282] 0 containers: []
	W1209 16:52:04.686556    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:04.686623    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:04.696715    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:52:04.696733    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:52:04.696738    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:52:04.715769    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:52:04.715782    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:52:04.726978    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:04.726990    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:04.731335    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:52:04.731340    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:52:04.743197    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:52:04.743207    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:52:04.757492    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:52:04.757502    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:52:04.769395    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:04.769409    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:04.814816    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:52:04.814826    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:52:04.853917    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:52:04.853932    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:52:04.867892    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:52:04.867905    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:52:04.892675    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:52:04.892689    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:04.905044    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:04.905059    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:52:04.943874    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:52:04.943884    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:52:04.957720    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:52:04.957731    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:52:04.969279    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:04.969291    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:04.993694    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:52:04.993713    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:52:05.006407    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:52:05.006422    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:52:07.524664    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:12.526660    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:12.526845    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:12.543505    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:52:12.543609    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:12.556147    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:52:12.556233    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:12.567341    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:52:12.567409    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:12.578629    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:52:12.578732    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:12.588982    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:52:12.589054    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:12.603364    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:52:12.603436    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:12.620859    5393 logs.go:282] 0 containers: []
	W1209 16:52:12.620871    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:12.620935    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:12.631494    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:52:12.631516    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:52:12.631522    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:52:12.642494    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:52:12.642506    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:52:12.657593    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:12.657606    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:12.681550    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:52:12.681561    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:52:12.695943    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:12.695959    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:52:12.734285    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:12.734296    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:12.772539    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:52:12.772551    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:52:12.787717    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:52:12.787729    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:52:12.805217    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:52:12.805228    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:52:12.821087    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:52:12.821099    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:52:12.836872    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:52:12.836886    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:52:12.849506    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:12.849519    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:12.854094    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:52:12.854100    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:52:12.868344    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:52:12.868354    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:52:12.905843    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:52:12.905857    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:52:12.922141    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:52:12.922152    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:52:12.933844    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:52:12.933856    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:15.448532    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:20.450773    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:20.450868    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:20.462580    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:52:20.462660    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:20.474095    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:52:20.474178    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:20.489276    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:52:20.489358    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:20.500708    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:52:20.500786    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:20.512182    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:52:20.512265    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:20.523882    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:52:20.523963    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:20.534548    5393 logs.go:282] 0 containers: []
	W1209 16:52:20.534560    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:20.534628    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:20.546083    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:52:20.546102    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:52:20.546108    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:52:20.585432    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:52:20.585448    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:52:20.601369    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:52:20.601381    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:52:20.614313    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:52:20.614327    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:52:20.628179    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:20.628190    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:20.666945    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:52:20.666956    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:52:20.681853    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:52:20.681865    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:52:20.695737    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:52:20.695749    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:52:20.715395    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:52:20.715407    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:52:20.732781    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:20.732794    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:52:20.769994    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:52:20.770003    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:52:20.784179    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:52:20.784192    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:20.796698    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:20.796707    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:20.801376    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:52:20.801381    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:52:20.815880    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:52:20.815894    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:52:20.828069    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:52:20.828079    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:52:20.842747    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:20.842761    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:23.368823    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:28.371211    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:28.371788    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:28.411763    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:52:28.411927    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:28.432216    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:52:28.432331    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:28.450597    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:52:28.450687    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:28.463006    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:52:28.463082    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:28.473356    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:52:28.473426    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:28.483590    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:52:28.483667    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:28.493419    5393 logs.go:282] 0 containers: []
	W1209 16:52:28.493433    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:28.493490    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:28.508511    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:52:28.508528    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:52:28.508534    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:52:28.523032    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:28.523042    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:28.527257    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:52:28.527266    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:52:28.540992    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:52:28.541005    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:52:28.580056    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:52:28.580068    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:52:28.594167    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:52:28.594180    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:52:28.605079    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:52:28.605090    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:52:28.619425    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:52:28.619437    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:52:28.636573    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:28.636583    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:52:28.673601    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:28.673610    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:28.711083    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:52:28.711095    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:52:28.725226    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:28.725237    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:28.749607    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:52:28.749628    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:52:28.761778    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:52:28.761790    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:52:28.777128    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:52:28.777138    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:52:28.788710    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:52:28.788720    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:52:28.800181    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:52:28.800193    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:31.314137    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:36.316483    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:36.316693    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:36.338841    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:52:36.338969    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:36.356815    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:52:36.356906    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:36.368874    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:52:36.368969    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:36.380351    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:52:36.380427    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:36.391214    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:52:36.391300    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:36.402146    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:52:36.402221    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:36.413285    5393 logs.go:282] 0 containers: []
	W1209 16:52:36.413295    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:36.413353    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:36.423884    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:52:36.423905    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:36.423912    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:52:36.461579    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:52:36.461591    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:52:36.477091    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:52:36.477101    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:52:36.488579    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:52:36.488592    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:36.501845    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:52:36.501857    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:52:36.539068    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:52:36.539079    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:52:36.550786    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:52:36.550798    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:52:36.562246    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:36.562257    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:36.566495    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:52:36.566501    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:52:36.581005    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:52:36.581016    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:52:36.593102    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:52:36.593114    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:52:36.608979    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:52:36.608990    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:52:36.626404    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:36.626417    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:36.662945    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:52:36.662956    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:52:36.677370    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:52:36.677383    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:52:36.693073    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:52:36.693088    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:52:36.707045    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:36.707059    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:39.231163    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:44.233456    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:44.233622    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:44.246938    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:52:44.247014    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:44.258104    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:52:44.258169    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:44.268622    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:52:44.268699    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:44.279199    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:52:44.279272    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:44.293917    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:52:44.293989    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:44.304545    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:52:44.304618    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:44.315357    5393 logs.go:282] 0 containers: []
	W1209 16:52:44.315370    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:44.315439    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:44.326432    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:52:44.326452    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:52:44.326458    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:52:44.341130    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:52:44.341142    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:52:44.353639    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:44.353650    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:52:44.391171    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:44.391188    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:44.426886    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:52:44.426900    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:52:44.441709    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:52:44.441723    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:52:44.456670    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:52:44.456681    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:52:44.469112    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:52:44.469126    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:52:44.488199    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:44.488212    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:44.492613    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:52:44.492624    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:52:44.507687    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:52:44.507702    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:52:44.522123    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:52:44.522133    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:52:44.560109    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:52:44.560118    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:52:44.575415    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:52:44.575425    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:52:44.594622    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:52:44.594633    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:52:44.605947    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:44.605959    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:44.628483    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:52:44.628491    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:47.144018    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:52:52.146322    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:52:52.146455    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:52:52.158622    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:52:52.158711    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:52:52.169987    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:52:52.170065    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:52:52.181001    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:52:52.181081    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:52:52.191432    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:52:52.191513    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:52:52.203182    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:52:52.203254    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:52:52.213675    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:52:52.213750    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:52:52.223853    5393 logs.go:282] 0 containers: []
	W1209 16:52:52.223864    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:52:52.223933    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:52:52.234533    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:52:52.234552    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:52:52.234558    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:52:52.270062    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:52:52.270076    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:52:52.307106    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:52:52.307116    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:52:52.319411    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:52:52.319423    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:52:52.357609    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:52:52.357620    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:52:52.372407    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:52:52.372420    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:52:52.384489    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:52:52.384502    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:52:52.399717    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:52:52.399728    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:52:52.411491    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:52:52.411502    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:52:52.415603    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:52:52.415610    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:52:52.429785    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:52:52.429798    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:52:52.441656    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:52:52.441670    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:52:52.454089    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:52:52.454101    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:52:52.474581    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:52:52.474595    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:52:52.487506    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:52:52.487519    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:52:52.506134    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:52:52.506144    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:52:52.520822    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:52:52.520832    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:52:55.047182    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:00.049382    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:00.049650    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:53:00.071281    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:53:00.071416    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:53:00.093864    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:53:00.093965    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:53:00.113209    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:53:00.113307    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:53:00.137347    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:53:00.137431    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:53:00.156180    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:53:00.156285    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:53:00.168800    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:53:00.168885    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:53:00.179070    5393 logs.go:282] 0 containers: []
	W1209 16:53:00.179083    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:53:00.179145    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:53:00.195390    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:53:00.195409    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:53:00.195414    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:53:00.233313    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:53:00.233322    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:53:00.238071    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:53:00.238083    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:53:00.252385    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:53:00.252398    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:53:00.263990    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:53:00.264002    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:53:00.276713    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:53:00.276723    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:53:00.317440    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:53:00.317450    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:53:00.332253    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:53:00.332266    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:53:00.347616    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:53:00.347628    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:53:00.383740    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:53:00.383751    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:53:00.397743    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:53:00.397753    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:53:00.418782    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:53:00.418794    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:53:00.430646    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:53:00.430656    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:53:00.444683    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:53:00.444694    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:53:00.456509    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:53:00.456519    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:53:00.478921    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:53:00.478932    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:53:00.493179    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:53:00.493190    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:53:03.017662    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:08.019989    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:08.020195    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:53:08.035771    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:53:08.035871    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:53:08.047961    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:53:08.048037    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:53:08.062763    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:53:08.062838    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:53:08.073457    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:53:08.073539    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:53:08.084547    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:53:08.084621    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:53:08.095282    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:53:08.095348    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:53:08.105730    5393 logs.go:282] 0 containers: []
	W1209 16:53:08.105743    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:53:08.105804    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:53:08.116178    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:53:08.116195    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:53:08.116201    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:53:08.153533    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:53:08.153545    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:53:08.190380    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:53:08.190391    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:53:08.201335    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:53:08.201347    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:53:08.216274    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:53:08.216283    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:53:08.240021    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:53:08.240032    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:53:08.259990    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:53:08.260005    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:53:08.264516    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:53:08.264525    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:53:08.299541    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:53:08.299558    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:53:08.313392    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:53:08.313408    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:53:08.328271    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:53:08.328281    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:53:08.345550    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:53:08.345561    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:53:08.359606    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:53:08.359617    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:53:08.370979    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:53:08.370990    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:53:08.383496    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:53:08.383511    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:53:08.398739    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:53:08.398750    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:53:08.413926    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:53:08.413937    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
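
With the IDs collected, the gather step itself is uniform: docker logs --tail 400 per container, journalctl for the kubelet and Docker units, a filtered dmesg, kubectl describe nodes, and a crictl/docker ps fallback for container status. The gather order changes from pass to pass in the log, which suggests iteration over a Go map; a sketch of the fan-out under that assumption (the command strings are taken verbatim from the log, the map layout is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// One shell command per log source, as in the "Gathering logs for
	// ..." lines above; map iteration order is randomized in Go, which
	// would explain the shuffled order between passes.
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"kube-apiserver":   "docker logs --tail 400 e7e93ee22e8e",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		fmt.Println("Gathering logs for", name, "...")
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Println(name, "failed:", err)
		} else {
			_ = out // minikube folds this output into its own log stream
		}
	}
}
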
	I1209 16:53:10.928049    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:15.930348    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:15.930640    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:53:15.959064    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:53:15.959178    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:53:15.975783    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:53:15.975867    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:53:15.989600    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:53:15.989681    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:53:16.001513    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:53:16.001592    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:53:16.013292    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:53:16.013367    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:53:16.024024    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:53:16.024100    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:53:16.034398    5393 logs.go:282] 0 containers: []
	W1209 16:53:16.034411    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:53:16.034474    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:53:16.045602    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:53:16.045620    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:53:16.045625    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:53:16.049832    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:53:16.049842    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:53:16.085421    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:53:16.085432    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:53:16.124207    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:53:16.124217    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:53:16.137860    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:53:16.137872    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:53:16.152076    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:53:16.152087    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:53:16.164095    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:53:16.164104    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:53:16.179449    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:53:16.179461    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:53:16.196812    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:53:16.196822    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:53:16.213448    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:53:16.213460    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:53:16.225076    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:53:16.225087    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:53:16.248344    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:53:16.248354    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:53:16.260349    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:53:16.260362    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:53:16.298221    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:53:16.298237    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:53:16.312047    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:53:16.312058    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:53:16.325853    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:53:16.325862    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:53:16.336661    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:53:16.336676    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:53:18.850099    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:23.850919    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:23.851175    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:53:23.875291    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:53:23.875436    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:53:23.892285    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:53:23.892384    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:53:23.905736    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:53:23.905813    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:53:23.917625    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:53:23.917709    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:53:23.928238    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:53:23.928312    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:53:23.938548    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:53:23.938620    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:53:23.948877    5393 logs.go:282] 0 containers: []
	W1209 16:53:23.948889    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:53:23.948954    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:53:23.959390    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:53:23.959409    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:53:23.959415    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:53:23.971377    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:53:23.971387    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:53:23.983821    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:53:23.983832    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:53:23.995682    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:53:23.995693    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:53:24.000157    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:53:24.000165    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:53:24.034894    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:53:24.034905    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:53:24.050400    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:53:24.050411    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:53:24.065365    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:53:24.065377    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:53:24.077281    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:53:24.077290    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:53:24.099958    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:53:24.099967    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:53:24.112431    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:53:24.112442    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:53:24.129803    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:53:24.129813    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:53:24.167672    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:53:24.167682    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:53:24.186856    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:53:24.186867    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:53:24.204731    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:53:24.204743    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:53:24.219726    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:53:24.219736    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:53:24.258196    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:53:24.258217    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:53:26.783055    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:31.787640    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:31.787988    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:53:31.816948    5393 logs.go:282] 2 containers: [e7e93ee22e8e 5c9cfb9c3cc2]
	I1209 16:53:31.817102    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:53:31.835266    5393 logs.go:282] 2 containers: [0e7115dae54e 040bb0a9f533]
	I1209 16:53:31.835371    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:53:31.849757    5393 logs.go:282] 1 containers: [f956b4079430]
	I1209 16:53:31.849866    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:53:31.864925    5393 logs.go:282] 2 containers: [7c2040eab1d7 54ad1b7454b7]
	I1209 16:53:31.865010    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:53:31.876970    5393 logs.go:282] 1 containers: [ea5d9f0beb08]
	I1209 16:53:31.877048    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:53:31.889632    5393 logs.go:282] 2 containers: [ca6ed422c2a0 6dca5a28bb4e]
	I1209 16:53:31.889735    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:53:31.900287    5393 logs.go:282] 0 containers: []
	W1209 16:53:31.900299    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:53:31.900361    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:53:31.919339    5393 logs.go:282] 2 containers: [3c6fefc5ddb8 21549367ef39]
	I1209 16:53:31.919356    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:53:31.919361    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:53:31.959625    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:53:31.959645    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:53:31.964091    5393 logs.go:123] Gathering logs for kube-controller-manager [6dca5a28bb4e] ...
	I1209 16:53:31.964097    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6dca5a28bb4e"
	I1209 16:53:31.978699    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:53:31.978708    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:53:32.001461    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:53:32.001471    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:53:32.014643    5393 logs.go:123] Gathering logs for kube-apiserver [e7e93ee22e8e] ...
	I1209 16:53:32.014652    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e7e93ee22e8e"
	I1209 16:53:32.028785    5393 logs.go:123] Gathering logs for kube-apiserver [5c9cfb9c3cc2] ...
	I1209 16:53:32.028794    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5c9cfb9c3cc2"
	I1209 16:53:32.066805    5393 logs.go:123] Gathering logs for coredns [f956b4079430] ...
	I1209 16:53:32.066818    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f956b4079430"
	I1209 16:53:32.078026    5393 logs.go:123] Gathering logs for kube-scheduler [7c2040eab1d7] ...
	I1209 16:53:32.078037    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c2040eab1d7"
	I1209 16:53:32.089457    5393 logs.go:123] Gathering logs for kube-scheduler [54ad1b7454b7] ...
	I1209 16:53:32.089469    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54ad1b7454b7"
	I1209 16:53:32.104067    5393 logs.go:123] Gathering logs for storage-provisioner [3c6fefc5ddb8] ...
	I1209 16:53:32.104077    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c6fefc5ddb8"
	I1209 16:53:32.115591    5393 logs.go:123] Gathering logs for storage-provisioner [21549367ef39] ...
	I1209 16:53:32.115603    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21549367ef39"
	I1209 16:53:32.126999    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:53:32.127010    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:53:32.161938    5393 logs.go:123] Gathering logs for etcd [0e7115dae54e] ...
	I1209 16:53:32.161949    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0e7115dae54e"
	I1209 16:53:32.176387    5393 logs.go:123] Gathering logs for etcd [040bb0a9f533] ...
	I1209 16:53:32.176398    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040bb0a9f533"
	I1209 16:53:32.191075    5393 logs.go:123] Gathering logs for kube-proxy [ea5d9f0beb08] ...
	I1209 16:53:32.191085    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea5d9f0beb08"
	I1209 16:53:32.202949    5393 logs.go:123] Gathering logs for kube-controller-manager [ca6ed422c2a0] ...
	I1209 16:53:32.202961    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca6ed422c2a0"
	I1209 16:53:34.723361    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:39.727113    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:39.727190    5393 kubeadm.go:597] duration metric: took 4m4.066397375s to restartPrimaryControlPlane
	W1209 16:53:39.727261    5393 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1209 16:53:39.727285    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
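
The transition at 16:53:39 is the decision point of this run: after 4m4s of failed healthz probes, the restart path gives up ("will reset cluster") and falls back to kubeadm reset --force followed by a fresh kubeadm init. The control flow amounts to a bounded retry loop; a minimal sketch with hypothetical names (the real budget and probe live in minikube's kubeadm.go):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitHealthy polls until the probe succeeds or the budget is spent.
// In the log the budget elapses (duration metric: 4m4.066397375s), so
// the caller proceeds to `kubeadm reset` and re-initializes.
func waitHealthy(probe func() error, budget, interval time.Duration) error {
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		if err := probe(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("control plane not healthy within budget; resetting cluster")
}

func main() {
	// Tiny budget so the demo terminates quickly.
	err := waitHealthy(func() error { return errors.New("healthz unreachable") },
		2*time.Second, 500*time.Millisecond)
	fmt.Println(err)
}
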
	I1209 16:53:40.721340    5393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1209 16:53:40.726610    5393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1209 16:53:40.729520    5393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1209 16:53:40.732151    5393 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1209 16:53:40.732157    5393 kubeadm.go:157] found existing configuration files:
	
	I1209 16:53:40.732192    5393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/admin.conf
	I1209 16:53:40.735662    5393 kubeadm.go:163] "https://control-plane.minikube.internal:65214" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1209 16:53:40.735696    5393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1209 16:53:40.738790    5393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/kubelet.conf
	I1209 16:53:40.741384    5393 kubeadm.go:163] "https://control-plane.minikube.internal:65214" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1209 16:53:40.741416    5393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1209 16:53:40.744010    5393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/controller-manager.conf
	I1209 16:53:40.747191    5393 kubeadm.go:163] "https://control-plane.minikube.internal:65214" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1209 16:53:40.747220    5393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1209 16:53:40.750346    5393 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/scheduler.conf
	I1209 16:53:40.752675    5393 kubeadm.go:163] "https://control-plane.minikube.internal:65214" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:65214 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1209 16:53:40.752703    5393 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
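
The block above is the pre-init stale-config check: for each kubeconfig under /etc/kubernetes, grep for the expected control-plane endpoint and delete the file when the endpoint is absent. In this run the files do not exist at all after the reset, so each grep exits with status 2 and each rm -f is a no-op. The loop is equivalent to the following (sketch; the real check runs over SSH and the endpoint comes from the profile config):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:65214"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or stale: remove it so `kubeadm init` regenerates it.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			_ = os.Remove(conf) // tolerate "no such file", like rm -f
		}
	}
}
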
	I1209 16:53:40.755666    5393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1209 16:53:40.774216    5393 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1209 16:53:40.774276    5393 kubeadm.go:310] [preflight] Running pre-flight checks
	I1209 16:53:40.826320    5393 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1209 16:53:40.826381    5393 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1209 16:53:40.826426    5393 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1209 16:53:40.874317    5393 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1209 16:53:40.877396    5393 out.go:235]   - Generating certificates and keys ...
	I1209 16:53:40.877432    5393 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1209 16:53:40.877475    5393 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1209 16:53:40.877530    5393 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1209 16:53:40.877564    5393 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1209 16:53:40.877612    5393 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1209 16:53:40.877646    5393 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1209 16:53:40.877689    5393 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1209 16:53:40.877716    5393 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1209 16:53:40.881192    5393 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1209 16:53:40.881245    5393 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1209 16:53:40.881269    5393 kubeadm.go:310] [certs] Using the existing "sa" key
	I1209 16:53:40.881298    5393 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1209 16:53:40.940605    5393 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1209 16:53:41.097398    5393 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1209 16:53:41.188301    5393 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1209 16:53:41.314172    5393 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1209 16:53:41.348498    5393 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1209 16:53:41.348924    5393 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1209 16:53:41.348987    5393 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1209 16:53:41.428944    5393 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1209 16:53:41.431725    5393 out.go:235]   - Booting up control plane ...
	I1209 16:53:41.431766    5393 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1209 16:53:41.431798    5393 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1209 16:53:41.431836    5393 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1209 16:53:41.431878    5393 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1209 16:53:41.431962    5393 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1209 16:53:45.932820    5393 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502050 seconds
	I1209 16:53:45.932911    5393 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1209 16:53:45.937743    5393 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1209 16:53:46.459781    5393 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1209 16:53:46.460025    5393 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-632000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1209 16:53:46.966161    5393 kubeadm.go:310] [bootstrap-token] Using token: munmqi.ceqq9zvy2a0d2cid
	I1209 16:53:46.972650    5393 out.go:235]   - Configuring RBAC rules ...
	I1209 16:53:46.972737    5393 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1209 16:53:46.972810    5393 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1209 16:53:46.976539    5393 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1209 16:53:46.977802    5393 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1209 16:53:46.979429    5393 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1209 16:53:46.980711    5393 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1209 16:53:46.984462    5393 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1209 16:53:47.171701    5393 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1209 16:53:47.374210    5393 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1209 16:53:47.374393    5393 kubeadm.go:310] 
	I1209 16:53:47.374431    5393 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1209 16:53:47.374436    5393 kubeadm.go:310] 
	I1209 16:53:47.374538    5393 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1209 16:53:47.374548    5393 kubeadm.go:310] 
	I1209 16:53:47.374560    5393 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1209 16:53:47.374588    5393 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1209 16:53:47.374619    5393 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1209 16:53:47.374622    5393 kubeadm.go:310] 
	I1209 16:53:47.374649    5393 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1209 16:53:47.374652    5393 kubeadm.go:310] 
	I1209 16:53:47.374695    5393 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1209 16:53:47.374697    5393 kubeadm.go:310] 
	I1209 16:53:47.374726    5393 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1209 16:53:47.374762    5393 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1209 16:53:47.374838    5393 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1209 16:53:47.374840    5393 kubeadm.go:310] 
	I1209 16:53:47.374944    5393 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1209 16:53:47.374984    5393 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1209 16:53:47.374989    5393 kubeadm.go:310] 
	I1209 16:53:47.375036    5393 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token munmqi.ceqq9zvy2a0d2cid \
	I1209 16:53:47.375091    5393 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eb7b4eec38a0897ce971e2bba2a6b79ec587773d147d857ca417d407ce72cb1f \
	I1209 16:53:47.375103    5393 kubeadm.go:310] 	--control-plane 
	I1209 16:53:47.375106    5393 kubeadm.go:310] 
	I1209 16:53:47.375175    5393 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1209 16:53:47.375195    5393 kubeadm.go:310] 
	I1209 16:53:47.375278    5393 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token munmqi.ceqq9zvy2a0d2cid \
	I1209 16:53:47.375343    5393 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:eb7b4eec38a0897ce971e2bba2a6b79ec587773d147d857ca417d407ce72cb1f 
	I1209 16:53:47.375393    5393 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1209 16:53:47.375495    5393 cni.go:84] Creating CNI manager for ""
	I1209 16:53:47.375504    5393 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:53:47.379318    5393 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1209 16:53:47.385381    5393 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1209 16:53:47.388530    5393 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
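
For the bridge CNI step, minikube renders a conflist in memory and copies it into the guest as /etc/cni/net.d/1-k8s.conflist (496 bytes in this run). The exact contents are templated by minikube and not shown in the log; the snippet below writes an illustrative bridge-plus-portmap conflist of the same general shape (all field values are hypothetical):

package main

import (
	"fmt"
	"os"
)

// Illustrative only: a typical bridge CNI conflist. The real file is
// generated by minikube and its name, subnet, and flags may differ.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Written to the working directory for inspection; minikube streams
	// its version into the guest ("scp memory --> ..." above).
	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
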
	I1209 16:53:47.393235    5393 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1209 16:53:47.393304    5393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1209 16:53:47.393318    5393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-632000 minikube.k8s.io/updated_at=2024_12_09T16_53_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9 minikube.k8s.io/name=stopped-upgrade-632000 minikube.k8s.io/primary=true
	I1209 16:53:47.457577    5393 ops.go:34] apiserver oom_adj: -16
	I1209 16:53:47.457596    5393 kubeadm.go:1113] duration metric: took 64.328542ms to wait for elevateKubeSystemPrivileges
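
Post-init housekeeping: grant kube-system's default service account cluster-admin (the minikube-rbac binding), label the node with minikube metadata, and confirm the apiserver's OOM adjustment (-16, i.e. well protected from the kernel OOM killer). The oom_adj read is a one-liner against /proc; an equivalent Go sketch (Linux-only, assumes pgrep and a running kube-apiserver):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of: cat /proc/$(pgrep kube-apiserver)/oom_adj
	pid, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "no kube-apiserver process:", err)
		os.Exit(1)
	}
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data))) // -16 above
}
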
	I1209 16:53:47.457605    5393 kubeadm.go:394] duration metric: took 4m11.808330834s to StartCluster
	I1209 16:53:47.457613    5393 settings.go:142] acquiring lock: {Name:mk6085b49e250ce3863979186260a283889e4dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:53:47.457708    5393 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:53:47.458179    5393 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/kubeconfig: {Name:mk5092322010dd3bee2f23e3f2812067ca57270f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:53:47.458425    5393 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:53:47.458537    5393 config.go:182] Loaded profile config "stopped-upgrade-632000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:53:47.458485    5393 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1209 16:53:47.458568    5393 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-632000"
	I1209 16:53:47.458577    5393 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-632000"
	W1209 16:53:47.458581    5393 addons.go:243] addon storage-provisioner should already be in state true
	I1209 16:53:47.458593    5393 host.go:66] Checking if "stopped-upgrade-632000" exists ...
	I1209 16:53:47.458599    5393 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-632000"
	I1209 16:53:47.458609    5393 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-632000"
	I1209 16:53:47.462306    5393 out.go:177] * Verifying Kubernetes components...
	I1209 16:53:47.463007    5393 kapi.go:59] client config for stopped-upgrade-632000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/stopped-upgrade-632000/client.key", CAFile:"/Users/jenkins/minikube-integration/20062-1231/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1066cf740), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
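
The kapi.go:59 dump above is a client-go rest.Config assembled from the profile's client certificate, key, and CA. Building an equivalent client looks like this (sketch using the exact paths and host from the log; assumes k8s.io/client-go and k8s.io/apimachinery are available):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	base := "/Users/jenkins/minikube-integration/20062-1231/.minikube"
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: base + "/profiles/stopped-upgrade-632000/client.crt",
			KeyFile:  base + "/profiles/stopped-upgrade-632000/client.key",
			CAFile:   base + "/ca.crt",
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// From the host this hits the same wall as the test below:
	// 10.0.2.15:8443 is only reachable inside the QEMU guest network.
	_, err = clientset.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
	fmt.Println("list storageclasses:", err)
}
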
	I1209 16:53:47.465579    5393 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-632000"
	W1209 16:53:47.465584    5393 addons.go:243] addon default-storageclass should already be in state true
	I1209 16:53:47.465592    5393 host.go:66] Checking if "stopped-upgrade-632000" exists ...
	I1209 16:53:47.466111    5393 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1209 16:53:47.466116    5393 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1209 16:53:47.466121    5393 sshutil.go:53] new ssh client: &{IP:localhost Port:65179 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/id_rsa Username:docker}
	I1209 16:53:47.466389    5393 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1209 16:53:47.470266    5393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1209 16:53:47.473292    5393 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 16:53:47.473297    5393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1209 16:53:47.473302    5393 sshutil.go:53] new ssh client: &{IP:localhost Port:65179 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/id_rsa Username:docker}
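
Both addon manifests travel over the SSH channel noted at sshutil.go:53: localhost:65179 (a host port forwarded into the QEMU guest's sshd), user docker, key auth with the machine's id_rsa. A minimal equivalent client with golang.org/x/crypto/ssh (sketch; host-key checking is disabled for brevity, acceptable only for a throwaway local VM):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/stopped-upgrade-632000/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
	}
	client, err := ssh.Dial("tcp", "localhost:65179", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected; manifests are streamed over sessions like scp")
}
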
	I1209 16:53:47.562899    5393 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1209 16:53:47.567754    5393 api_server.go:52] waiting for apiserver process to appear ...
	I1209 16:53:47.567807    5393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1209 16:53:47.571704    5393 api_server.go:72] duration metric: took 113.252584ms to wait for apiserver process to appear ...
	I1209 16:53:47.571714    5393 api_server.go:88] waiting for apiserver healthz status ...
	I1209 16:53:47.571721    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:47.578653    5393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1209 16:53:47.640700    5393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1209 16:53:47.935849    5393 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1209 16:53:47.935877    5393 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1209 16:53:52.574027    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:52.574077    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:53:57.574851    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:53:57.574900    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:54:02.575454    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:02.575492    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:54:07.576091    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:07.576119    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:54:12.576700    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:12.576733    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:54:17.577733    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:17.577752    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1209 16:54:17.938698    5393 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1209 16:54:17.944026    5393 out.go:177] * Enabled addons: storage-provisioner
	I1209 16:54:17.950885    5393 addons.go:510] duration metric: took 30.490595458s for enable addons: enabled=[storage-provisioner]
	I1209 16:54:22.578545    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:22.578566    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:54:27.579566    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:27.579625    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:54:32.580937    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:32.580962    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:54:37.581423    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:37.581445    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:54:42.583126    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:42.583149    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:54:47.585307    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:47.585420    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:54:47.599051    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:54:47.599131    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:54:47.609994    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:54:47.610069    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:54:47.620478    5393 logs.go:282] 2 containers: [faf77a083e48 4f5e4dbc17b9]
	I1209 16:54:47.620559    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:54:47.631586    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:54:47.631659    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:54:47.641874    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:54:47.641972    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:54:47.652379    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:54:47.652461    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:54:47.668284    5393 logs.go:282] 0 containers: []
	W1209 16:54:47.668296    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:54:47.668364    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:54:47.678817    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:54:47.678831    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:54:47.678837    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:54:47.690923    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:54:47.690934    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:54:47.695465    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:54:47.695474    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:54:47.735665    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:54:47.735676    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:54:47.747373    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:54:47.747384    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:54:47.764712    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:54:47.764723    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:54:47.776366    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:54:47.776377    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:54:47.802483    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:54:47.802495    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:54:47.841536    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:54:47.841549    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:54:47.855841    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:54:47.855851    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:54:47.871176    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:54:47.871187    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:54:47.883060    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:54:47.883072    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:54:47.894216    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:54:47.894230    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:54:50.411173    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:54:55.411841    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:54:55.412034    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:54:55.428838    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:54:55.428934    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:54:55.441954    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:54:55.442043    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:54:55.453223    5393 logs.go:282] 2 containers: [faf77a083e48 4f5e4dbc17b9]
	I1209 16:54:55.453296    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:54:55.463614    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:54:55.463697    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:54:55.474195    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:54:55.474277    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:54:55.484673    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:54:55.484748    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:54:55.495184    5393 logs.go:282] 0 containers: []
	W1209 16:54:55.495195    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:54:55.495267    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:54:55.505656    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:54:55.505672    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:54:55.505678    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:54:55.509826    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:54:55.509833    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:54:55.524187    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:54:55.524199    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:54:55.535797    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:54:55.535810    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:54:55.549783    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:54:55.549800    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:54:55.563924    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:54:55.563935    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:54:55.579466    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:54:55.579475    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:54:55.618754    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:54:55.618772    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:54:55.660529    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:54:55.660540    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:54:55.680502    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:54:55.680517    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:54:55.698615    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:54:55.698627    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:54:55.718685    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:54:55.718702    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:54:55.733645    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:54:55.733654    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:54:58.260728    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:55:03.263502    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:55:03.263957    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:55:03.293952    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:55:03.294088    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:55:03.312955    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:55:03.313038    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:55:03.327563    5393 logs.go:282] 2 containers: [faf77a083e48 4f5e4dbc17b9]
	I1209 16:55:03.327650    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:55:03.340382    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:55:03.340466    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:55:03.358314    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:55:03.358424    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:55:03.370432    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:55:03.370532    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:55:03.382492    5393 logs.go:282] 0 containers: []
	W1209 16:55:03.382505    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:55:03.382586    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:55:03.398976    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:55:03.398993    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:55:03.398999    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:55:03.414955    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:55:03.414971    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:55:03.430971    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:55:03.430985    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:55:03.450813    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:55:03.450827    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:55:03.468451    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:55:03.468464    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:55:03.506929    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:55:03.506947    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:55:03.511831    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:55:03.511843    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:55:03.550295    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:55:03.550307    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:55:03.562617    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:55:03.562631    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:55:03.574258    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:55:03.574272    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:55:03.588857    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:55:03.588868    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:55:03.606791    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:55:03.606802    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:55:03.630722    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:55:03.630731    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:55:06.149339    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:55:11.151649    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:55:11.152208    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:55:11.190297    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:55:11.190451    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:55:11.211733    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:55:11.211857    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:55:11.227888    5393 logs.go:282] 2 containers: [faf77a083e48 4f5e4dbc17b9]
	I1209 16:55:11.227975    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:55:11.240465    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:55:11.240541    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:55:11.253807    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:55:11.253881    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:55:11.264738    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:55:11.264812    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:55:11.275406    5393 logs.go:282] 0 containers: []
	W1209 16:55:11.275417    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:55:11.275487    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:55:11.286015    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:55:11.286030    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:55:11.286036    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:55:11.298311    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:55:11.298323    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:55:11.317586    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:55:11.317597    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:55:11.355449    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:55:11.355459    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:55:11.360116    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:55:11.360122    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:55:11.374370    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:55:11.374381    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:55:11.389711    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:55:11.389723    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:55:11.401458    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:55:11.401472    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:55:11.415979    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:55:11.415990    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:55:11.439269    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:55:11.439278    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:55:11.475516    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:55:11.475530    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:55:11.488810    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:55:11.488821    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:55:11.500602    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:55:11.500615    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:55:14.013976    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:55:19.016334    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:55:19.016764    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:55:19.047565    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:55:19.047702    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:55:19.067160    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:55:19.067266    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:55:19.081927    5393 logs.go:282] 2 containers: [faf77a083e48 4f5e4dbc17b9]
	I1209 16:55:19.082005    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:55:19.094156    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:55:19.094231    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:55:19.105071    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:55:19.105142    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:55:19.115662    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:55:19.115733    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:55:19.126154    5393 logs.go:282] 0 containers: []
	W1209 16:55:19.126170    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:55:19.126231    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:55:19.136751    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:55:19.136769    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:55:19.136776    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:55:19.148012    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:55:19.148024    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:55:19.160097    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:55:19.160109    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:55:19.183661    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:55:19.183673    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:55:19.207469    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:55:19.207476    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:55:19.243591    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:55:19.243598    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:55:19.247604    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:55:19.247609    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:55:19.265002    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:55:19.265015    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:55:19.276745    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:55:19.276758    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:55:19.287748    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:55:19.287759    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:55:19.323237    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:55:19.323249    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:55:19.343395    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:55:19.343408    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:55:19.358314    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:55:19.358325    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:55:21.871619    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:55:26.872703    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:55:26.872819    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:55:26.882948    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:55:26.883018    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:55:26.893692    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:55:26.893767    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:55:26.903903    5393 logs.go:282] 2 containers: [faf77a083e48 4f5e4dbc17b9]
	I1209 16:55:26.903979    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:55:26.913715    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:55:26.913788    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:55:26.924429    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:55:26.924503    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:55:26.935736    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:55:26.935809    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:55:26.946448    5393 logs.go:282] 0 containers: []
	W1209 16:55:26.946464    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:55:26.946523    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:55:26.957635    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:55:26.957653    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:55:26.957660    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:55:26.971729    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:55:26.971741    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:55:26.988910    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:55:26.988924    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:55:27.013894    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:55:27.013901    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:55:27.026075    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:55:27.026088    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:55:27.060031    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:55:27.060042    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:55:27.074781    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:55:27.074793    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:55:27.088977    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:55:27.088990    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:55:27.100653    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:55:27.100665    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:55:27.113729    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:55:27.113741    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:55:27.151635    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:55:27.151642    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:55:27.155680    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:55:27.155687    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:55:27.167200    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:55:27.167212    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:55:29.680039    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:55:34.682522    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:55:34.683060    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:55:34.723395    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:55:34.723543    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:55:34.746168    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:55:34.746293    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:55:34.761533    5393 logs.go:282] 2 containers: [faf77a083e48 4f5e4dbc17b9]
	I1209 16:55:34.761620    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:55:34.773857    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:55:34.773937    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:55:34.784837    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:55:34.784916    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:55:34.795402    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:55:34.795482    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:55:34.810019    5393 logs.go:282] 0 containers: []
	W1209 16:55:34.810031    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:55:34.810102    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:55:34.821048    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:55:34.821062    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:55:34.821069    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:55:34.865070    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:55:34.865083    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:55:34.879821    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:55:34.879835    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:55:34.894067    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:55:34.894078    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:55:34.906235    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:55:34.906246    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:55:34.921557    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:55:34.921568    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:55:34.936138    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:55:34.936150    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:55:34.974553    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:55:34.974563    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:55:34.979060    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:55:34.979068    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:55:34.996520    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:55:34.996533    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:55:35.020693    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:55:35.020702    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:55:35.032588    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:55:35.032600    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:55:35.045214    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:55:35.045225    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:55:37.587878    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:55:42.589689    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:55:42.589790    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:55:42.602264    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:55:42.602334    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:55:42.612470    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:55:42.612546    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:55:42.623447    5393 logs.go:282] 2 containers: [faf77a083e48 4f5e4dbc17b9]
	I1209 16:55:42.623524    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:55:42.637837    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:55:42.637911    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:55:42.648392    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:55:42.648464    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:55:42.658731    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:55:42.658800    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:55:42.668897    5393 logs.go:282] 0 containers: []
	W1209 16:55:42.668907    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:55:42.668964    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:55:42.679096    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:55:42.679110    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:55:42.679116    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:55:42.690738    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:55:42.690751    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:55:42.729135    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:55:42.729143    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:55:42.733298    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:55:42.733304    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:55:42.749966    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:55:42.749978    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:55:42.764701    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:55:42.764712    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:55:42.781747    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:55:42.781760    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:55:42.793083    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:55:42.793094    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:55:42.815993    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:55:42.816001    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:55:42.853042    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:55:42.853054    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:55:42.866738    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:55:42.866748    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:55:42.880211    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:55:42.880222    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:55:42.891766    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:55:42.891780    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:55:45.406938    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:55:50.409439    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:55:50.409914    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:55:50.443640    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:55:50.443794    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:55:50.464200    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:55:50.464304    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:55:50.482781    5393 logs.go:282] 2 containers: [faf77a083e48 4f5e4dbc17b9]
	I1209 16:55:50.482866    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:55:50.493904    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:55:50.493975    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:55:50.504854    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:55:50.504933    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:55:50.515550    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:55:50.515622    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:55:50.525576    5393 logs.go:282] 0 containers: []
	W1209 16:55:50.525587    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:55:50.525652    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:55:50.543872    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:55:50.543887    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:55:50.543893    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:55:50.561785    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:55:50.561795    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:55:50.573173    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:55:50.573187    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:55:50.584427    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:55:50.584439    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:55:50.622282    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:55:50.622289    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:55:50.634369    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:55:50.634380    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:55:50.648614    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:55:50.648624    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:55:50.662769    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:55:50.662779    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:55:50.674068    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:55:50.674079    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:55:50.688317    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:55:50.688326    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:55:50.710679    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:55:50.710691    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:55:50.735785    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:55:50.735795    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:55:50.740372    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:55:50.740382    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:55:53.276955    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:55:58.279515    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:55:58.279853    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:55:58.309298    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:55:58.309426    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:55:58.328520    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:55:58.328602    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:55:58.343615    5393 logs.go:282] 2 containers: [faf77a083e48 4f5e4dbc17b9]
	I1209 16:55:58.343681    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:55:58.354895    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:55:58.354954    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:55:58.365178    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:55:58.365251    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:55:58.375852    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:55:58.375928    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:55:58.386920    5393 logs.go:282] 0 containers: []
	W1209 16:55:58.386932    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:55:58.386992    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:55:58.397969    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:55:58.397986    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:55:58.397992    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:55:58.436020    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:55:58.436028    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:55:58.450644    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:55:58.450655    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:55:58.464825    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:55:58.464837    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:55:58.482306    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:55:58.482318    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:55:58.494644    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:55:58.494656    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:55:58.505729    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:55:58.505741    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:55:58.528796    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:55:58.528803    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:55:58.532818    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:55:58.532827    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:55:58.567968    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:55:58.567980    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:55:58.582489    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:55:58.582504    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:55:58.618617    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:55:58.618631    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:55:58.653792    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:55:58.653806    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:56:01.189083    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:56:06.191691    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:56:06.192244    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:56:06.232021    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:56:06.232172    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:56:06.253852    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:56:06.253970    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:56:06.269286    5393 logs.go:282] 4 containers: [3a65e138305c 4ef5f9d1555b faf77a083e48 4f5e4dbc17b9]
	I1209 16:56:06.269379    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:56:06.282358    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:56:06.282436    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:56:06.293789    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:56:06.293866    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:56:06.305473    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:56:06.305547    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:56:06.316182    5393 logs.go:282] 0 containers: []
	W1209 16:56:06.316194    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:56:06.316256    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:56:06.327369    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:56:06.327390    5393 logs.go:123] Gathering logs for coredns [3a65e138305c] ...
	I1209 16:56:06.327397    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65e138305c"
	I1209 16:56:06.341040    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:56:06.341052    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:56:06.356785    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:56:06.356798    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:56:06.377947    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:56:06.377960    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:56:06.389904    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:56:06.389918    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:56:06.395954    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:56:06.395965    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:56:06.409504    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:56:06.409518    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:56:06.447216    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:56:06.447226    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:56:06.463653    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:56:06.463666    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:56:06.487550    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:56:06.487560    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:56:06.499905    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:56:06.499915    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:56:06.535864    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:56:06.535875    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:56:06.551334    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:56:06.551344    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:56:06.570097    5393 logs.go:123] Gathering logs for coredns [4ef5f9d1555b] ...
	I1209 16:56:06.570108    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ef5f9d1555b"
	I1209 16:56:06.582444    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:56:06.582455    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:56:09.097090    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:56:14.099936    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:56:14.100470    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:56:14.145708    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:56:14.145847    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:56:14.166215    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:56:14.166322    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:56:14.181838    5393 logs.go:282] 4 containers: [3a65e138305c 4ef5f9d1555b faf77a083e48 4f5e4dbc17b9]
	I1209 16:56:14.181929    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:56:14.194154    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:56:14.194236    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:56:14.205523    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:56:14.205598    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:56:14.216636    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:56:14.216713    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:56:14.227831    5393 logs.go:282] 0 containers: []
	W1209 16:56:14.227844    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:56:14.227902    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:56:14.239062    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:56:14.239080    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:56:14.239086    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:56:14.275932    5393 logs.go:123] Gathering logs for coredns [3a65e138305c] ...
	I1209 16:56:14.275945    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65e138305c"
	I1209 16:56:14.288233    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:56:14.288246    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:56:14.300676    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:56:14.300689    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:56:14.338274    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:56:14.338284    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:56:14.342315    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:56:14.342321    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:56:14.365234    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:56:14.365243    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:56:14.376794    5393 logs.go:123] Gathering logs for coredns [4ef5f9d1555b] ...
	I1209 16:56:14.376805    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ef5f9d1555b"
	I1209 16:56:14.388718    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:56:14.388728    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:56:14.400330    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:56:14.400340    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:56:14.418789    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:56:14.418800    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:56:14.434592    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:56:14.434610    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:56:14.451526    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:56:14.451543    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:56:14.469030    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:56:14.469045    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:56:14.485098    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:56:14.485111    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:56:17.000139    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:56:22.002447    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:56:22.002943    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:56:22.044470    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:56:22.044613    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:56:22.066341    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:56:22.066462    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:56:22.083234    5393 logs.go:282] 4 containers: [3a65e138305c 4ef5f9d1555b faf77a083e48 4f5e4dbc17b9]
	I1209 16:56:22.083317    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:56:22.096533    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:56:22.096612    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:56:22.108117    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:56:22.108195    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:56:22.119337    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:56:22.119412    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:56:22.130824    5393 logs.go:282] 0 containers: []
	W1209 16:56:22.130836    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:56:22.130906    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:56:22.142342    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:56:22.142361    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:56:22.142366    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:56:22.177833    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:56:22.177848    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:56:22.189930    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:56:22.189944    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:56:22.210432    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:56:22.210444    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:56:22.235837    5393 logs.go:123] Gathering logs for coredns [4ef5f9d1555b] ...
	I1209 16:56:22.235845    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ef5f9d1555b"
	I1209 16:56:22.248806    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:56:22.248819    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:56:22.261765    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:56:22.261778    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:56:22.279412    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:56:22.279421    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:56:22.316938    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:56:22.316948    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:56:22.321017    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:56:22.321027    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:56:22.335425    5393 logs.go:123] Gathering logs for coredns [3a65e138305c] ...
	I1209 16:56:22.335437    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65e138305c"
	I1209 16:56:22.353977    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:56:22.353991    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:56:22.366735    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:56:22.366745    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:56:22.379056    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:56:22.379069    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:56:22.394016    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:56:22.394028    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:56:24.914339    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:56:29.915815    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:56:29.915907    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:56:29.929492    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:56:29.929552    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:56:29.940960    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:56:29.941028    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:56:29.953293    5393 logs.go:282] 4 containers: [3a65e138305c 4ef5f9d1555b faf77a083e48 4f5e4dbc17b9]
	I1209 16:56:29.953372    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:56:29.966163    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:56:29.966228    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:56:29.978131    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:56:29.978196    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:56:29.989617    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:56:29.989699    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:56:30.001749    5393 logs.go:282] 0 containers: []
	W1209 16:56:30.001760    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:56:30.001812    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:56:30.014947    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:56:30.014966    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:56:30.014972    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:56:30.055232    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:56:30.055260    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:56:30.060362    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:56:30.060375    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:56:30.105229    5393 logs.go:123] Gathering logs for coredns [3a65e138305c] ...
	I1209 16:56:30.105242    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65e138305c"
	I1209 16:56:30.121690    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:56:30.121700    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:56:30.135645    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:56:30.135657    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:56:30.148918    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:56:30.148931    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:56:30.164187    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:56:30.164201    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:56:30.179583    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:56:30.179597    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:56:30.195579    5393 logs.go:123] Gathering logs for coredns [4ef5f9d1555b] ...
	I1209 16:56:30.195595    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ef5f9d1555b"
	I1209 16:56:30.208848    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:56:30.208860    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:56:30.228183    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:56:30.228194    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:56:30.247490    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:56:30.247502    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:56:30.262390    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:56:30.262405    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:56:30.289356    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:56:30.289366    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:56:32.804228    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:56:37.807022    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:56:37.807469    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:56:37.842446    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:56:37.842609    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:56:37.863407    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:56:37.863503    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:56:37.877327    5393 logs.go:282] 4 containers: [3a65e138305c 4ef5f9d1555b faf77a083e48 4f5e4dbc17b9]
	I1209 16:56:37.877416    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:56:37.889733    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:56:37.889803    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:56:37.900328    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:56:37.900404    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:56:37.910874    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:56:37.910939    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:56:37.920295    5393 logs.go:282] 0 containers: []
	W1209 16:56:37.920307    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:56:37.920366    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:56:37.930049    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:56:37.930066    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:56:37.930071    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:56:37.934209    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:56:37.934218    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:56:37.969681    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:56:37.969690    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:56:38.005966    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:56:38.005977    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:56:38.019839    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:56:38.019849    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:56:38.031775    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:56:38.031786    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:56:38.043439    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:56:38.043451    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:56:38.054680    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:56:38.054693    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:56:38.066008    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:56:38.066020    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:56:38.086037    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:56:38.086047    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:56:38.109450    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:56:38.109458    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:56:38.123399    5393 logs.go:123] Gathering logs for coredns [3a65e138305c] ...
	I1209 16:56:38.123408    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65e138305c"
	I1209 16:56:38.134694    5393 logs.go:123] Gathering logs for coredns [4ef5f9d1555b] ...
	I1209 16:56:38.134705    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ef5f9d1555b"
	I1209 16:56:38.146085    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:56:38.146098    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:56:38.160512    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:56:38.160522    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:56:40.674822    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:56:45.676222    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:56:45.676406    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:56:45.706916    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:56:45.707002    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:56:45.724036    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:56:45.724114    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:56:45.735688    5393 logs.go:282] 4 containers: [3a65e138305c 4ef5f9d1555b faf77a083e48 4f5e4dbc17b9]
	I1209 16:56:45.735765    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:56:45.751165    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:56:45.751243    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:56:45.762853    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:56:45.762934    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:56:45.772916    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:56:45.772990    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:56:45.783127    5393 logs.go:282] 0 containers: []
	W1209 16:56:45.783139    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:56:45.783201    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:56:45.793690    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:56:45.793705    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:56:45.793711    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:56:45.798245    5393 logs.go:123] Gathering logs for coredns [3a65e138305c] ...
	I1209 16:56:45.798255    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65e138305c"
	I1209 16:56:45.809526    5393 logs.go:123] Gathering logs for coredns [4ef5f9d1555b] ...
	I1209 16:56:45.809541    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ef5f9d1555b"
	I1209 16:56:45.820999    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:56:45.821010    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:56:45.833342    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:56:45.833351    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:56:45.851393    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:56:45.851402    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:56:45.877113    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:56:45.877121    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:56:45.914477    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:56:45.914486    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:56:45.948788    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:56:45.948803    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:56:45.963755    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:56:45.963767    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:56:45.975772    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:56:45.975785    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:56:46.001180    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:56:46.001190    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:56:46.013092    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:56:46.013106    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:56:46.027948    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:56:46.027960    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:56:46.040153    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:56:46.040164    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:56:48.552621    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:56:53.555548    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:56:53.555977    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:56:53.587628    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:56:53.587765    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:56:53.607532    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:56:53.607628    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:56:53.621990    5393 logs.go:282] 4 containers: [3a65e138305c 4ef5f9d1555b faf77a083e48 4f5e4dbc17b9]
	I1209 16:56:53.622068    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:56:53.633808    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:56:53.633883    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:56:53.644843    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:56:53.644924    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:56:53.655550    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:56:53.655615    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:56:53.665711    5393 logs.go:282] 0 containers: []
	W1209 16:56:53.665722    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:56:53.665774    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:56:53.676202    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:56:53.676221    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:56:53.676227    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:56:53.712905    5393 logs.go:123] Gathering logs for coredns [3a65e138305c] ...
	I1209 16:56:53.712914    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65e138305c"
	I1209 16:56:53.725194    5393 logs.go:123] Gathering logs for coredns [4ef5f9d1555b] ...
	I1209 16:56:53.725206    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ef5f9d1555b"
	I1209 16:56:53.737033    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:56:53.737047    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:56:53.761764    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:56:53.761774    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:56:53.801526    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:56:53.801539    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:56:53.815727    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:56:53.815740    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:56:53.831501    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:56:53.831512    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:56:53.836316    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:56:53.836325    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:56:53.855265    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:56:53.855276    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:56:53.867190    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:56:53.867204    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:56:53.878254    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:56:53.878264    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:56:53.896896    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:56:53.896907    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:56:53.908045    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:56:53.908058    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:56:53.919311    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:56:53.919324    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:56:56.433285    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:57:01.435595    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:57:01.436083    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:57:01.471335    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:57:01.471482    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:57:01.490713    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:57:01.490823    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:57:01.505116    5393 logs.go:282] 4 containers: [3a65e138305c 4ef5f9d1555b faf77a083e48 4f5e4dbc17b9]
	I1209 16:57:01.505191    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:57:01.517832    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:57:01.517913    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:57:01.528281    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:57:01.528360    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:57:01.538748    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:57:01.538829    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:57:01.549213    5393 logs.go:282] 0 containers: []
	W1209 16:57:01.549223    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:57:01.549285    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:57:01.559602    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:57:01.559621    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:57:01.559627    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:57:01.595454    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:57:01.595465    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:57:01.630263    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:57:01.630273    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:57:01.644974    5393 logs.go:123] Gathering logs for coredns [3a65e138305c] ...
	I1209 16:57:01.644986    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65e138305c"
	I1209 16:57:01.657388    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:57:01.657403    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:57:01.671912    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:57:01.671925    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:57:01.684029    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:57:01.684041    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:57:01.688761    5393 logs.go:123] Gathering logs for coredns [4ef5f9d1555b] ...
	I1209 16:57:01.688769    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ef5f9d1555b"
	I1209 16:57:01.701222    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:57:01.701234    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:57:01.713433    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:57:01.713447    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:57:01.725129    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:57:01.725141    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:57:01.742641    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:57:01.742651    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:57:01.766386    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:57:01.766394    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:57:01.780197    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:57:01.780207    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:57:01.792701    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:57:01.792711    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:57:04.306728    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:57:09.308535    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:57:09.308633    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:57:09.321252    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:57:09.321331    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:57:09.335819    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:57:09.335893    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:57:09.346839    5393 logs.go:282] 4 containers: [3a65e138305c 4ef5f9d1555b faf77a083e48 4f5e4dbc17b9]
	I1209 16:57:09.346913    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:57:09.356797    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:57:09.356873    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:57:09.367460    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:57:09.367534    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:57:09.377628    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:57:09.377703    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:57:09.387489    5393 logs.go:282] 0 containers: []
	W1209 16:57:09.387500    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:57:09.387564    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:57:09.397426    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:57:09.397446    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:57:09.397452    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:57:09.414224    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:57:09.414236    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:57:09.437909    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:57:09.437917    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:57:09.473655    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:57:09.473664    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:57:09.509719    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:57:09.509732    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:57:09.527854    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:57:09.527866    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:57:09.539439    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:57:09.539451    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:57:09.550884    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:57:09.550897    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:57:09.564408    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:57:09.564419    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:57:09.580515    5393 logs.go:123] Gathering logs for coredns [3a65e138305c] ...
	I1209 16:57:09.580529    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65e138305c"
	I1209 16:57:09.591943    5393 logs.go:123] Gathering logs for coredns [4ef5f9d1555b] ...
	I1209 16:57:09.591955    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ef5f9d1555b"
	I1209 16:57:09.606886    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:57:09.606898    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:57:09.617745    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:57:09.617756    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:57:09.623218    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:57:09.623227    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:57:09.634964    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:57:09.634977    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:57:12.150459    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:57:17.153288    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:57:17.153883    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:57:17.195386    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:57:17.195535    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:57:17.217303    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:57:17.217434    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:57:17.242231    5393 logs.go:282] 4 containers: [3a65e138305c 4ef5f9d1555b faf77a083e48 4f5e4dbc17b9]
	I1209 16:57:17.242306    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:57:17.253944    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:57:17.254022    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:57:17.267066    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:57:17.267142    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:57:17.280751    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:57:17.280819    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:57:17.291169    5393 logs.go:282] 0 containers: []
	W1209 16:57:17.291178    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:57:17.291237    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:57:17.301498    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:57:17.301516    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:57:17.301522    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:57:17.315759    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:57:17.315771    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:57:17.339614    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:57:17.339624    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:57:17.351252    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:57:17.351263    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:57:17.388070    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:57:17.388085    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:57:17.402523    5393 logs.go:123] Gathering logs for coredns [4ef5f9d1555b] ...
	I1209 16:57:17.402536    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ef5f9d1555b"
	I1209 16:57:17.414294    5393 logs.go:123] Gathering logs for coredns [3a65e138305c] ...
	I1209 16:57:17.414308    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65e138305c"
	I1209 16:57:17.425563    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:57:17.425573    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:57:17.437080    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:57:17.437096    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:57:17.448708    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:57:17.448723    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:57:17.466677    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:57:17.466691    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:57:17.505191    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:57:17.505200    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:57:17.509480    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:57:17.509487    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:57:17.523275    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:57:17.523288    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:57:17.535086    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:57:17.535097    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:57:20.048527    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:57:25.050785    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:57:25.051275    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:57:25.083622    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:57:25.083764    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:57:25.102396    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:57:25.102499    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:57:25.116988    5393 logs.go:282] 4 containers: [3a65e138305c 4ef5f9d1555b faf77a083e48 4f5e4dbc17b9]
	I1209 16:57:25.117072    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:57:25.129033    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:57:25.129107    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:57:25.139921    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:57:25.139997    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:57:25.150555    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:57:25.150631    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:57:25.160764    5393 logs.go:282] 0 containers: []
	W1209 16:57:25.160776    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:57:25.160841    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:57:25.171998    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:57:25.172014    5393 logs.go:123] Gathering logs for coredns [3a65e138305c] ...
	I1209 16:57:25.172021    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65e138305c"
	I1209 16:57:25.183781    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:57:25.183794    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:57:25.199925    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:57:25.199939    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:57:25.244946    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:57:25.244961    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:57:25.260738    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:57:25.260748    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:57:25.277056    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:57:25.277071    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:57:25.291633    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:57:25.291647    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:57:25.303142    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:57:25.303156    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:57:25.320314    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:57:25.320324    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:57:25.344768    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:57:25.344779    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:57:25.349126    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:57:25.349134    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:57:25.364689    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:57:25.364698    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:57:25.376627    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:57:25.376641    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:57:25.388757    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:57:25.388770    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:57:25.422861    5393 logs.go:123] Gathering logs for coredns [4ef5f9d1555b] ...
	I1209 16:57:25.422875    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ef5f9d1555b"
	I1209 16:57:27.936680    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:57:32.938701    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:57:32.938774    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:57:32.952315    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:57:32.952380    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:57:32.966758    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:57:32.966827    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:57:32.979070    5393 logs.go:282] 4 containers: [3a65e138305c 4ef5f9d1555b faf77a083e48 4f5e4dbc17b9]
	I1209 16:57:32.979140    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:57:32.990854    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:57:32.990932    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:57:33.002518    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:57:33.002582    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:57:33.013761    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:57:33.013827    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:57:33.024980    5393 logs.go:282] 0 containers: []
	W1209 16:57:33.024995    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:57:33.025060    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:57:33.036784    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:57:33.036799    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:57:33.036806    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:57:33.050570    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:57:33.050581    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:57:33.062847    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:57:33.062856    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:57:33.077526    5393 logs.go:123] Gathering logs for coredns [4ef5f9d1555b] ...
	I1209 16:57:33.077538    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ef5f9d1555b"
	I1209 16:57:33.092248    5393 logs.go:123] Gathering logs for coredns [3a65e138305c] ...
	I1209 16:57:33.092259    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65e138305c"
	I1209 16:57:33.106703    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:57:33.106712    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:57:33.124953    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:57:33.124967    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:57:33.138699    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:57:33.138713    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:57:33.176528    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:57:33.176540    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:57:33.194440    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:57:33.194453    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:57:33.208973    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:57:33.208983    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:57:33.226298    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:57:33.226311    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:57:33.251363    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:57:33.251380    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:57:33.264038    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:57:33.264050    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:57:33.301819    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:57:33.301829    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:57:35.808309    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:57:40.811045    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:57:40.811600    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1209 16:57:40.854716    5393 logs.go:282] 1 containers: [14eb16f0c121]
	I1209 16:57:40.854861    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1209 16:57:40.875474    5393 logs.go:282] 1 containers: [9ac448ce1874]
	I1209 16:57:40.875589    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1209 16:57:40.890312    5393 logs.go:282] 4 containers: [3a65e138305c 4ef5f9d1555b faf77a083e48 4f5e4dbc17b9]
	I1209 16:57:40.890386    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1209 16:57:40.902940    5393 logs.go:282] 1 containers: [cf60f9c53178]
	I1209 16:57:40.903018    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1209 16:57:40.913497    5393 logs.go:282] 1 containers: [f638e9997f33]
	I1209 16:57:40.913573    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1209 16:57:40.924296    5393 logs.go:282] 1 containers: [94e659184851]
	I1209 16:57:40.924360    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1209 16:57:40.942401    5393 logs.go:282] 0 containers: []
	W1209 16:57:40.942413    5393 logs.go:284] No container was found matching "kindnet"
	I1209 16:57:40.942484    5393 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1209 16:57:40.953207    5393 logs.go:282] 1 containers: [c4cacf010c32]
	I1209 16:57:40.953226    5393 logs.go:123] Gathering logs for dmesg ...
	I1209 16:57:40.953232    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1209 16:57:40.957635    5393 logs.go:123] Gathering logs for describe nodes ...
	I1209 16:57:40.957646    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1209 16:57:40.997236    5393 logs.go:123] Gathering logs for kube-scheduler [cf60f9c53178] ...
	I1209 16:57:40.997247    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf60f9c53178"
	I1209 16:57:41.011950    5393 logs.go:123] Gathering logs for storage-provisioner [c4cacf010c32] ...
	I1209 16:57:41.011963    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c4cacf010c32"
	I1209 16:57:41.024577    5393 logs.go:123] Gathering logs for coredns [3a65e138305c] ...
	I1209 16:57:41.024588    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a65e138305c"
	I1209 16:57:41.035738    5393 logs.go:123] Gathering logs for coredns [4f5e4dbc17b9] ...
	I1209 16:57:41.035750    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f5e4dbc17b9"
	I1209 16:57:41.047509    5393 logs.go:123] Gathering logs for kube-controller-manager [94e659184851] ...
	I1209 16:57:41.047522    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94e659184851"
	I1209 16:57:41.065790    5393 logs.go:123] Gathering logs for container status ...
	I1209 16:57:41.065800    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1209 16:57:41.078126    5393 logs.go:123] Gathering logs for kube-apiserver [14eb16f0c121] ...
	I1209 16:57:41.078143    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 14eb16f0c121"
	I1209 16:57:41.094608    5393 logs.go:123] Gathering logs for etcd [9ac448ce1874] ...
	I1209 16:57:41.094628    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9ac448ce1874"
	I1209 16:57:41.111291    5393 logs.go:123] Gathering logs for coredns [faf77a083e48] ...
	I1209 16:57:41.111307    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf77a083e48"
	I1209 16:57:41.125355    5393 logs.go:123] Gathering logs for kube-proxy [f638e9997f33] ...
	I1209 16:57:41.125369    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f638e9997f33"
	I1209 16:57:41.139837    5393 logs.go:123] Gathering logs for Docker ...
	I1209 16:57:41.139851    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1209 16:57:41.166230    5393 logs.go:123] Gathering logs for kubelet ...
	I1209 16:57:41.166252    5393 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1209 16:57:41.205572    5393 logs.go:123] Gathering logs for coredns [4ef5f9d1555b] ...
	I1209 16:57:41.205592    5393 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ef5f9d1555b"
	I1209 16:57:43.722095    5393 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1209 16:57:48.724324    5393 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1209 16:57:48.728864    5393 out.go:201] 
	W1209 16:57:48.732790    5393 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1209 16:57:48.732795    5393 out.go:270] * 
	W1209 16:57:48.733211    5393 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:57:48.748839    5393 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-632000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (575.81s)
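The repeated cycle above is minikube's node-wait loop: probe https://10.0.2.15:8443/healthz with a short (about 5s) per-request timeout, gather the component logs (kubelet, etcd, coredns, kube-apiserver, kube-scheduler, kube-proxy, kube-controller-manager, storage-provisioner) after each failed probe, and repeat until the overall 6m0s node wait expires with GUEST_START. A minimal Go sketch of that polling pattern follows; only the URL and the rough timings are taken from the log, and the function name and constants are illustrative, not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver healthz endpoint until it returns
// 200 OK or the overall wait budget is exhausted.
func waitForHealthz(url string, wait time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-probe timeout, matching the ~5s gaps in the log
		Transport: &http.Transport{
			// The test VM serves a self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(wait)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported healthy
			}
		}
		time.Sleep(2 * time.Second) // illustrative pause between probes
	}
	return fmt.Errorf("apiserver healthz never reported healthy within %s", wait)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}

Against a guest whose apiserver never comes up, every Get fails on the client timeout, which is exactly the repeating "stopped: ... context deadline exceeded" line above until the 6m0s budget runs out.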

TestPause/serial/Start (10.04s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-449000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-449000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.969183166s)

-- stdout --
	* [pause-449000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-449000" primary control-plane node in "pause-449000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-449000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-449000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-449000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-449000 -n pause-449000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-449000 -n pause-449000: exit status 7 (70.958542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-449000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.04s)
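This failure, and every NoKubernetes failure that follows, carries the same root signature: Failed to connect to "/var/run/socket_vmnet": Connection refused. QEMU could not attach its network because nothing was accepting connections on the socket_vmnet Unix socket. A quick Go probe to confirm that before re-running (a sketch; only the socket path is taken from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the Unix socket named in the errors above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "connection refused" matches the failure mode in the log:
		// the path is configured, but no daemon is serving it.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

A refused connection from this probe points at the shared socket_vmnet service on the CI host rather than at the individual tests, which is consistent with the identical errors across the profiles below.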

TestNoKubernetes/serial/StartWithK8s (10.05s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-507000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-507000 --driver=qemu2 : exit status 80 (9.990111084s)

-- stdout --
	* [NoKubernetes-507000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-507000" primary control-plane node in "NoKubernetes-507000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-507000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-507000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-507000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-507000 -n NoKubernetes-507000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-507000 -n NoKubernetes-507000: exit status 7 (61.889584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-507000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.05s)

TestNoKubernetes/serial/StartWithStopK8s (5.33s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-507000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-507000 --no-kubernetes --driver=qemu2 : exit status 80 (5.26276925s)

-- stdout --
	* [NoKubernetes-507000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-507000
	* Restarting existing qemu2 VM for "NoKubernetes-507000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-507000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-507000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-507000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-507000 -n NoKubernetes-507000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-507000 -n NoKubernetes-507000: exit status 7 (63.257542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-507000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.33s)

TestNoKubernetes/serial/Start (5.33s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-507000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-507000 --no-kubernetes --driver=qemu2 : exit status 80 (5.259720209s)

-- stdout --
	* [NoKubernetes-507000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-507000
	* Restarting existing qemu2 VM for "NoKubernetes-507000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-507000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-507000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-507000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-507000 -n NoKubernetes-507000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-507000 -n NoKubernetes-507000: exit status 7 (69.528584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-507000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.33s)

TestNoKubernetes/serial/StartNoArgs (5.29s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-507000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-507000 --driver=qemu2 : exit status 80 (5.255149625s)

-- stdout --
	* [NoKubernetes-507000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-507000
	* Restarting existing qemu2 VM for "NoKubernetes-507000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-507000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-507000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-507000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-507000 -n NoKubernetes-507000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-507000 -n NoKubernetes-507000: exit status 7 (35.35875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-507000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.29s)
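
Two stdout variants appear across these failures: the NoKubernetes runs above reuse an existing profile ("Restarting existing qemu2 VM"), while the network-plugin runs below provision fresh ones ("Creating qemu2 VM"). Both paths attach the VM's network through the same unix socket and fail identically, so the "minikube delete" advice printed in the logs cannot resolve this on its own; it only clears the profile. Once the daemon is back, the report's own suggestion applies, for example:

	out/minikube-darwin-arm64 delete -p NoKubernetes-507000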

TestNetworkPlugins/group/auto/Start (9.95s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-884000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-884000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.944718958s)

-- stdout --
	* [auto-884000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-884000" primary control-plane node in "auto-884000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-884000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 16:56:06.896427    5821 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:56:06.896604    5821 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:56:06.896611    5821 out.go:358] Setting ErrFile to fd 2...
	I1209 16:56:06.896613    5821 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:56:06.896756    5821 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:56:06.897951    5821 out.go:352] Setting JSON to false
	I1209 16:56:06.915773    5821 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5136,"bootTime":1733787030,"procs":538,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:56:06.915856    5821 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:56:06.921997    5821 out.go:177] * [auto-884000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:56:06.929966    5821 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:56:06.930026    5821 notify.go:220] Checking for updates...
	I1209 16:56:06.936913    5821 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:56:06.940965    5821 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:56:06.944948    5821 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:56:06.947946    5821 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:56:06.950916    5821 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:56:06.954262    5821 config.go:182] Loaded profile config "multinode-350000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:56:06.954339    5821 config.go:182] Loaded profile config "stopped-upgrade-632000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:56:06.954385    5821 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:56:06.957839    5821 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:56:06.964927    5821 start.go:297] selected driver: qemu2
	I1209 16:56:06.964932    5821 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:56:06.964938    5821 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:56:06.967264    5821 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 16:56:06.971815    5821 out.go:177] * Automatically selected the socket_vmnet network
	I1209 16:56:06.974954    5821 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:56:06.974969    5821 cni.go:84] Creating CNI manager for ""
	I1209 16:56:06.974987    5821 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:56:06.974991    5821 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 16:56:06.975020    5821 start.go:340] cluster config:
	{Name:auto-884000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:56:06.979412    5821 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:56:06.986903    5821 out.go:177] * Starting "auto-884000" primary control-plane node in "auto-884000" cluster
	I1209 16:56:06.990951    5821 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:56:06.990966    5821 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:56:06.990978    5821 cache.go:56] Caching tarball of preloaded images
	I1209 16:56:06.991049    5821 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:56:06.991062    5821 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:56:06.991117    5821 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/auto-884000/config.json ...
	I1209 16:56:06.991133    5821 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/auto-884000/config.json: {Name:mk69e9365c121dc4919f0fcb0ab7ea36e6ca264e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:56:06.991568    5821 start.go:360] acquireMachinesLock for auto-884000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:56:06.991615    5821 start.go:364] duration metric: took 41.208µs to acquireMachinesLock for "auto-884000"
	I1209 16:56:06.991626    5821 start.go:93] Provisioning new machine with config: &{Name:auto-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:56:06.991650    5821 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:56:06.994954    5821 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 16:56:07.010507    5821 start.go:159] libmachine.API.Create for "auto-884000" (driver="qemu2")
	I1209 16:56:07.010552    5821 client.go:168] LocalClient.Create starting
	I1209 16:56:07.010627    5821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:56:07.010664    5821 main.go:141] libmachine: Decoding PEM data...
	I1209 16:56:07.010678    5821 main.go:141] libmachine: Parsing certificate...
	I1209 16:56:07.010723    5821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:56:07.010751    5821 main.go:141] libmachine: Decoding PEM data...
	I1209 16:56:07.010759    5821 main.go:141] libmachine: Parsing certificate...
	I1209 16:56:07.011211    5821 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:56:07.178867    5821 main.go:141] libmachine: Creating SSH key...
	I1209 16:56:07.339641    5821 main.go:141] libmachine: Creating Disk image...
	I1209 16:56:07.339666    5821 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:56:07.339925    5821 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/auto-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/auto-884000/disk.qcow2
	I1209 16:56:07.350217    5821 main.go:141] libmachine: STDOUT: 
	I1209 16:56:07.350246    5821 main.go:141] libmachine: STDERR: 
	I1209 16:56:07.350300    5821 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/auto-884000/disk.qcow2 +20000M
	I1209 16:56:07.359130    5821 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:56:07.359147    5821 main.go:141] libmachine: STDERR: 
	I1209 16:56:07.359164    5821 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/auto-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/auto-884000/disk.qcow2
	I1209 16:56:07.359170    5821 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:56:07.359185    5821 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:56:07.359213    5821 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/auto-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/auto-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/auto-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:fe:92:d3:94:ab -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/auto-884000/disk.qcow2
	I1209 16:56:07.361011    5821 main.go:141] libmachine: STDOUT: 
	I1209 16:56:07.361025    5821 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:56:07.361044    5821 client.go:171] duration metric: took 350.48675ms to LocalClient.Create
	I1209 16:56:09.363242    5821 start.go:128] duration metric: took 2.371566542s to createHost
	I1209 16:56:09.363322    5821 start.go:83] releasing machines lock for "auto-884000", held for 2.371700042s
	W1209 16:56:09.363377    5821 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:56:09.375828    5821 out.go:177] * Deleting "auto-884000" in qemu2 ...
	W1209 16:56:09.400843    5821 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:56:09.400872    5821 start.go:729] Will try again in 5 seconds ...
	I1209 16:56:14.402982    5821 start.go:360] acquireMachinesLock for auto-884000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:56:14.403110    5821 start.go:364] duration metric: took 105.041µs to acquireMachinesLock for "auto-884000"
	I1209 16:56:14.403124    5821 start.go:93] Provisioning new machine with config: &{Name:auto-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:56:14.403180    5821 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:56:14.411341    5821 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 16:56:14.427144    5821 start.go:159] libmachine.API.Create for "auto-884000" (driver="qemu2")
	I1209 16:56:14.427175    5821 client.go:168] LocalClient.Create starting
	I1209 16:56:14.427268    5821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:56:14.427319    5821 main.go:141] libmachine: Decoding PEM data...
	I1209 16:56:14.427329    5821 main.go:141] libmachine: Parsing certificate...
	I1209 16:56:14.427371    5821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:56:14.427402    5821 main.go:141] libmachine: Decoding PEM data...
	I1209 16:56:14.427410    5821 main.go:141] libmachine: Parsing certificate...
	I1209 16:56:14.427842    5821 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:56:14.597622    5821 main.go:141] libmachine: Creating SSH key...
	I1209 16:56:14.743484    5821 main.go:141] libmachine: Creating Disk image...
	I1209 16:56:14.743495    5821 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:56:14.743737    5821 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/auto-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/auto-884000/disk.qcow2
	I1209 16:56:14.753805    5821 main.go:141] libmachine: STDOUT: 
	I1209 16:56:14.753824    5821 main.go:141] libmachine: STDERR: 
	I1209 16:56:14.753887    5821 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/auto-884000/disk.qcow2 +20000M
	I1209 16:56:14.762655    5821 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:56:14.762682    5821 main.go:141] libmachine: STDERR: 
	I1209 16:56:14.762696    5821 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/auto-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/auto-884000/disk.qcow2
	I1209 16:56:14.762701    5821 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:56:14.762712    5821 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:56:14.762738    5821 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/auto-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/auto-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/auto-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:39:54:11:06:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/auto-884000/disk.qcow2
	I1209 16:56:14.764641    5821 main.go:141] libmachine: STDOUT: 
	I1209 16:56:14.764657    5821 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:56:14.764670    5821 client.go:171] duration metric: took 337.489917ms to LocalClient.Create
	I1209 16:56:16.766113    5821 start.go:128] duration metric: took 2.362921417s to createHost
	I1209 16:56:16.766161    5821 start.go:83] releasing machines lock for "auto-884000", held for 2.3630455s
	W1209 16:56:16.766406    5821 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:56:16.774674    5821 out.go:201] 
	W1209 16:56:16.787719    5821 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:56:16.787740    5821 out.go:270] * 
	* 
	W1209 16:56:16.789221    5821 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:56:16.798700    5821 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.95s)
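
The verbose trace above pinpoints the failing step. libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ... -netdev socket,id=net0,fd=3 ...: the client first connects to the unix socket, then execs QEMU with the received network descriptor inherited as fd 3. "Connection refused" therefore comes from that initial connect, and the QEMU process never starts, which is why each attempt fails within milliseconds. The socket can be probed directly; nc's -U flag (BSD netcat's unix-socket mode) is an assumption about the CI host's tooling:

	# Expect an immediate "Connection refused" while the daemon is down.
	nc -U /var/run/socket_vmnet </dev/null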

TestNetworkPlugins/group/flannel/Start (9.92s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-884000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-884000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.920080583s)

-- stdout --
	* [flannel-884000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-884000" primary control-plane node in "flannel-884000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-884000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 16:56:19.198685    5936 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:56:19.198844    5936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:56:19.198847    5936 out.go:358] Setting ErrFile to fd 2...
	I1209 16:56:19.198849    5936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:56:19.198978    5936 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:56:19.200124    5936 out.go:352] Setting JSON to false
	I1209 16:56:19.218217    5936 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5149,"bootTime":1733787030,"procs":536,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:56:19.218295    5936 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:56:19.224478    5936 out.go:177] * [flannel-884000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:56:19.232441    5936 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:56:19.232497    5936 notify.go:220] Checking for updates...
	I1209 16:56:19.240412    5936 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:56:19.243404    5936 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:56:19.247400    5936 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:56:19.250426    5936 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:56:19.253445    5936 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:56:19.256808    5936 config.go:182] Loaded profile config "multinode-350000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:56:19.256885    5936 config.go:182] Loaded profile config "stopped-upgrade-632000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:56:19.256936    5936 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:56:19.260470    5936 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:56:19.267325    5936 start.go:297] selected driver: qemu2
	I1209 16:56:19.267331    5936 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:56:19.267339    5936 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:56:19.269771    5936 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 16:56:19.272436    5936 out.go:177] * Automatically selected the socket_vmnet network
	I1209 16:56:19.276489    5936 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:56:19.276515    5936 cni.go:84] Creating CNI manager for "flannel"
	I1209 16:56:19.276518    5936 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1209 16:56:19.276566    5936 start.go:340] cluster config:
	{Name:flannel-884000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:56:19.280811    5936 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:56:19.289462    5936 out.go:177] * Starting "flannel-884000" primary control-plane node in "flannel-884000" cluster
	I1209 16:56:19.292399    5936 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:56:19.292410    5936 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:56:19.292424    5936 cache.go:56] Caching tarball of preloaded images
	I1209 16:56:19.292483    5936 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:56:19.292487    5936 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:56:19.292535    5936 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/flannel-884000/config.json ...
	I1209 16:56:19.292545    5936 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/flannel-884000/config.json: {Name:mk4a4116891f6da917375725668711227061805d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:56:19.293013    5936 start.go:360] acquireMachinesLock for flannel-884000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:56:19.293064    5936 start.go:364] duration metric: took 44.792µs to acquireMachinesLock for "flannel-884000"
	I1209 16:56:19.293075    5936 start.go:93] Provisioning new machine with config: &{Name:flannel-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:56:19.293102    5936 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:56:19.297400    5936 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 16:56:19.312160    5936 start.go:159] libmachine.API.Create for "flannel-884000" (driver="qemu2")
	I1209 16:56:19.312188    5936 client.go:168] LocalClient.Create starting
	I1209 16:56:19.312258    5936 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:56:19.312296    5936 main.go:141] libmachine: Decoding PEM data...
	I1209 16:56:19.312309    5936 main.go:141] libmachine: Parsing certificate...
	I1209 16:56:19.312351    5936 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:56:19.312379    5936 main.go:141] libmachine: Decoding PEM data...
	I1209 16:56:19.312391    5936 main.go:141] libmachine: Parsing certificate...
	I1209 16:56:19.312755    5936 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:56:19.481072    5936 main.go:141] libmachine: Creating SSH key...
	I1209 16:56:19.586513    5936 main.go:141] libmachine: Creating Disk image...
	I1209 16:56:19.586521    5936 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:56:19.586739    5936 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/flannel-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/flannel-884000/disk.qcow2
	I1209 16:56:19.596887    5936 main.go:141] libmachine: STDOUT: 
	I1209 16:56:19.596914    5936 main.go:141] libmachine: STDERR: 
	I1209 16:56:19.596982    5936 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/flannel-884000/disk.qcow2 +20000M
	I1209 16:56:19.605573    5936 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:56:19.605587    5936 main.go:141] libmachine: STDERR: 
	I1209 16:56:19.605607    5936 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/flannel-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/flannel-884000/disk.qcow2
	I1209 16:56:19.605612    5936 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:56:19.605624    5936 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:56:19.605654    5936 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/flannel-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/flannel-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/flannel-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:cc:1e:df:35:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/flannel-884000/disk.qcow2
	I1209 16:56:19.607569    5936 main.go:141] libmachine: STDOUT: 
	I1209 16:56:19.607583    5936 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:56:19.607609    5936 client.go:171] duration metric: took 295.415459ms to LocalClient.Create
	I1209 16:56:21.609822    5936 start.go:128] duration metric: took 2.31669325s to createHost
	I1209 16:56:21.609972    5936 start.go:83] releasing machines lock for "flannel-884000", held for 2.316866708s
	W1209 16:56:21.610043    5936 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:56:21.617381    5936 out.go:177] * Deleting "flannel-884000" in qemu2 ...
	W1209 16:56:21.662439    5936 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:56:21.662470    5936 start.go:729] Will try again in 5 seconds ...
	I1209 16:56:26.664667    5936 start.go:360] acquireMachinesLock for flannel-884000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:56:26.664916    5936 start.go:364] duration metric: took 197.708µs to acquireMachinesLock for "flannel-884000"
	I1209 16:56:26.664970    5936 start.go:93] Provisioning new machine with config: &{Name:flannel-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:56:26.665056    5936 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:56:26.674294    5936 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 16:56:26.692952    5936 start.go:159] libmachine.API.Create for "flannel-884000" (driver="qemu2")
	I1209 16:56:26.692985    5936 client.go:168] LocalClient.Create starting
	I1209 16:56:26.693077    5936 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:56:26.693137    5936 main.go:141] libmachine: Decoding PEM data...
	I1209 16:56:26.693153    5936 main.go:141] libmachine: Parsing certificate...
	I1209 16:56:26.693196    5936 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:56:26.693230    5936 main.go:141] libmachine: Decoding PEM data...
	I1209 16:56:26.693236    5936 main.go:141] libmachine: Parsing certificate...
	I1209 16:56:26.693834    5936 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:56:26.862511    5936 main.go:141] libmachine: Creating SSH key...
	I1209 16:56:27.016850    5936 main.go:141] libmachine: Creating Disk image...
	I1209 16:56:27.016861    5936 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:56:27.017121    5936 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/flannel-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/flannel-884000/disk.qcow2
	I1209 16:56:27.027664    5936 main.go:141] libmachine: STDOUT: 
	I1209 16:56:27.027684    5936 main.go:141] libmachine: STDERR: 
	I1209 16:56:27.027744    5936 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/flannel-884000/disk.qcow2 +20000M
	I1209 16:56:27.036492    5936 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:56:27.036508    5936 main.go:141] libmachine: STDERR: 
	I1209 16:56:27.036529    5936 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/flannel-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/flannel-884000/disk.qcow2
	I1209 16:56:27.036533    5936 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:56:27.036541    5936 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:56:27.036568    5936 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/flannel-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/flannel-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/flannel-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:21:2d:f7:57:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/flannel-884000/disk.qcow2
	I1209 16:56:27.038430    5936 main.go:141] libmachine: STDOUT: 
	I1209 16:56:27.038444    5936 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:56:27.038461    5936 client.go:171] duration metric: took 345.473166ms to LocalClient.Create
	I1209 16:56:29.040662    5936 start.go:128] duration metric: took 2.375571875s to createHost
	I1209 16:56:29.040773    5936 start.go:83] releasing machines lock for "flannel-884000", held for 2.3758455s
	W1209 16:56:29.041108    5936 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:56:29.056751    5936 out.go:201] 
	W1209 16:56:29.059928    5936 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:56:29.059964    5936 out.go:270] * 
	* 
	W1209 16:56:29.062847    5936 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:56:29.074751    5936 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.92s)
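
[Editor's note] Every failed start in this group reduces to the same root cause: nothing is listening on /var/run/socket_vmnet on the build agent, so socket_vmnet_client cannot hand QEMU a network file descriptor and the VM create aborts. A minimal, self-contained Go sketch of that connectivity probe (illustrative only, not part of net_test.go; the socket path is copied from the failing command line above):

	// probe.go - editor's sketch, not part of the minikube test suite.
	// Dials the unix socket that socket_vmnet_client needs; a "connection
	// refused" here reproduces the failure captured in the logs above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failing command line
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}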

TestNetworkPlugins/group/kindnet/Start (10.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-884000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
E1209 16:56:39.910818    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-884000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (10.130538375s)

-- stdout --
	* [kindnet-884000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-884000" primary control-plane node in "kindnet-884000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-884000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 16:56:31.646855    6061 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:56:31.646998    6061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:56:31.647001    6061 out.go:358] Setting ErrFile to fd 2...
	I1209 16:56:31.647003    6061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:56:31.647130    6061 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:56:31.648244    6061 out.go:352] Setting JSON to false
	I1209 16:56:31.666727    6061 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5161,"bootTime":1733787030,"procs":538,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:56:31.666824    6061 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:56:31.671558    6061 out.go:177] * [kindnet-884000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:56:31.679589    6061 notify.go:220] Checking for updates...
	I1209 16:56:31.683520    6061 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:56:31.699492    6061 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:56:31.703494    6061 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:56:31.707472    6061 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:56:31.711424    6061 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:56:31.712871    6061 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:56:31.716784    6061 config.go:182] Loaded profile config "multinode-350000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:56:31.716859    6061 config.go:182] Loaded profile config "stopped-upgrade-632000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:56:31.716903    6061 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:56:31.719471    6061 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:56:31.725526    6061 start.go:297] selected driver: qemu2
	I1209 16:56:31.725531    6061 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:56:31.725536    6061 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:56:31.727967    6061 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 16:56:31.732462    6061 out.go:177] * Automatically selected the socket_vmnet network
	I1209 16:56:31.733995    6061 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:56:31.734020    6061 cni.go:84] Creating CNI manager for "kindnet"
	I1209 16:56:31.734023    6061 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1209 16:56:31.734058    6061 start.go:340] cluster config:
	{Name:kindnet-884000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:56:31.738285    6061 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:56:31.746539    6061 out.go:177] * Starting "kindnet-884000" primary control-plane node in "kindnet-884000" cluster
	I1209 16:56:31.750487    6061 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:56:31.750511    6061 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:56:31.750525    6061 cache.go:56] Caching tarball of preloaded images
	I1209 16:56:31.750605    6061 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:56:31.750610    6061 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:56:31.750677    6061 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/kindnet-884000/config.json ...
	I1209 16:56:31.750687    6061 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/kindnet-884000/config.json: {Name:mk9891da6b2742e3314aa53ac1e43f9bd34c1007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:56:31.751137    6061 start.go:360] acquireMachinesLock for kindnet-884000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:56:31.751180    6061 start.go:364] duration metric: took 37.416µs to acquireMachinesLock for "kindnet-884000"
	I1209 16:56:31.751195    6061 start.go:93] Provisioning new machine with config: &{Name:kindnet-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:56:31.751227    6061 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:56:31.755330    6061 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 16:56:31.769707    6061 start.go:159] libmachine.API.Create for "kindnet-884000" (driver="qemu2")
	I1209 16:56:31.769741    6061 client.go:168] LocalClient.Create starting
	I1209 16:56:31.769806    6061 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:56:31.769842    6061 main.go:141] libmachine: Decoding PEM data...
	I1209 16:56:31.769855    6061 main.go:141] libmachine: Parsing certificate...
	I1209 16:56:31.769907    6061 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:56:31.769935    6061 main.go:141] libmachine: Decoding PEM data...
	I1209 16:56:31.769943    6061 main.go:141] libmachine: Parsing certificate...
	I1209 16:56:31.770344    6061 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:56:31.938145    6061 main.go:141] libmachine: Creating SSH key...
	I1209 16:56:32.038104    6061 main.go:141] libmachine: Creating Disk image...
	I1209 16:56:32.038115    6061 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:56:32.038328    6061 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kindnet-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kindnet-884000/disk.qcow2
	I1209 16:56:32.048301    6061 main.go:141] libmachine: STDOUT: 
	I1209 16:56:32.048320    6061 main.go:141] libmachine: STDERR: 
	I1209 16:56:32.048382    6061 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kindnet-884000/disk.qcow2 +20000M
	I1209 16:56:32.057061    6061 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:56:32.057076    6061 main.go:141] libmachine: STDERR: 
	I1209 16:56:32.057092    6061 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kindnet-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kindnet-884000/disk.qcow2
	I1209 16:56:32.057097    6061 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:56:32.057110    6061 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:56:32.057140    6061 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kindnet-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kindnet-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kindnet-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:37:49:c6:a1:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kindnet-884000/disk.qcow2
	I1209 16:56:32.058948    6061 main.go:141] libmachine: STDOUT: 
	I1209 16:56:32.058963    6061 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:56:32.058984    6061 client.go:171] duration metric: took 289.236542ms to LocalClient.Create
	I1209 16:56:34.061225    6061 start.go:128] duration metric: took 2.309966417s to createHost
	I1209 16:56:34.061325    6061 start.go:83] releasing machines lock for "kindnet-884000", held for 2.310137875s
	W1209 16:56:34.061372    6061 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:56:34.080599    6061 out.go:177] * Deleting "kindnet-884000" in qemu2 ...
	W1209 16:56:34.112075    6061 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:56:34.112105    6061 start.go:729] Will try again in 5 seconds ...
	I1209 16:56:39.114338    6061 start.go:360] acquireMachinesLock for kindnet-884000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:56:39.114828    6061 start.go:364] duration metric: took 388.167µs to acquireMachinesLock for "kindnet-884000"
	I1209 16:56:39.114934    6061 start.go:93] Provisioning new machine with config: &{Name:kindnet-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:56:39.115113    6061 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:56:39.123524    6061 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 16:56:39.161187    6061 start.go:159] libmachine.API.Create for "kindnet-884000" (driver="qemu2")
	I1209 16:56:39.161251    6061 client.go:168] LocalClient.Create starting
	I1209 16:56:39.161366    6061 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:56:39.161442    6061 main.go:141] libmachine: Decoding PEM data...
	I1209 16:56:39.161457    6061 main.go:141] libmachine: Parsing certificate...
	I1209 16:56:39.161533    6061 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:56:39.161583    6061 main.go:141] libmachine: Decoding PEM data...
	I1209 16:56:39.161595    6061 main.go:141] libmachine: Parsing certificate...
	I1209 16:56:39.162434    6061 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:56:39.337418    6061 main.go:141] libmachine: Creating SSH key...
	I1209 16:56:39.677418    6061 main.go:141] libmachine: Creating Disk image...
	I1209 16:56:39.677430    6061 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:56:39.677666    6061 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kindnet-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kindnet-884000/disk.qcow2
	I1209 16:56:39.688123    6061 main.go:141] libmachine: STDOUT: 
	I1209 16:56:39.688152    6061 main.go:141] libmachine: STDERR: 
	I1209 16:56:39.688229    6061 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kindnet-884000/disk.qcow2 +20000M
	I1209 16:56:39.697215    6061 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:56:39.697231    6061 main.go:141] libmachine: STDERR: 
	I1209 16:56:39.697250    6061 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kindnet-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kindnet-884000/disk.qcow2
	I1209 16:56:39.697257    6061 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:56:39.697270    6061 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:56:39.697304    6061 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kindnet-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kindnet-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kindnet-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:a2:4c:27:fb:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kindnet-884000/disk.qcow2
	I1209 16:56:39.699234    6061 main.go:141] libmachine: STDOUT: 
	I1209 16:56:39.699248    6061 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:56:39.699265    6061 client.go:171] duration metric: took 538.010333ms to LocalClient.Create
	I1209 16:56:41.701387    6061 start.go:128] duration metric: took 2.586256125s to createHost
	I1209 16:56:41.701434    6061 start.go:83] releasing machines lock for "kindnet-884000", held for 2.586591708s
	W1209 16:56:41.701639    6061 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:56:41.716312    6061 out.go:201] 
	W1209 16:56:41.720249    6061 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:56:41.720269    6061 out.go:270] * 
	W1209 16:56:41.721871    6061 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:56:41.732288    6061 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (10.13s)
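
[Editor's note] The kindnet log shows minikube's full retry shape: LocalClient.Create fails, the half-created profile is deleted, start.go:729 sleeps five seconds, one retry runs, and the second failure becomes the GUEST_PROVISION exit (status 80). A sketch of that control flow under stated assumptions — createHost and deleteHost are hypothetical stand-ins for minikube's internals, and the error text is copied from the log:

	// retry.go - editor's sketch of the create/delete/retry flow above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for libmachine.API.Create; here it always fails
	// the way the agent did.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	// deleteHost stands in for the "* Deleting ... in qemu2 ..." cleanup.
	func deleteHost() {}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			deleteHost()
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				return // the caller maps this to exit status 80
			}
		}
		fmt.Println("host created")
	}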

TestNetworkPlugins/group/enable-default-cni/Start (10.04s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-884000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-884000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (10.035772s)

-- stdout --
	* [enable-default-cni-884000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-884000" primary control-plane node in "enable-default-cni-884000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-884000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 16:56:44.219893    6183 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:56:44.220041    6183 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:56:44.220043    6183 out.go:358] Setting ErrFile to fd 2...
	I1209 16:56:44.220046    6183 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:56:44.220184    6183 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:56:44.221445    6183 out.go:352] Setting JSON to false
	I1209 16:56:44.239417    6183 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5174,"bootTime":1733787030,"procs":539,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:56:44.239502    6183 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:56:44.245895    6183 out.go:177] * [enable-default-cni-884000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:56:44.252843    6183 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:56:44.252879    6183 notify.go:220] Checking for updates...
	I1209 16:56:44.261727    6183 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:56:44.264799    6183 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:56:44.268799    6183 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:56:44.271811    6183 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:56:44.274823    6183 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:56:44.278205    6183 config.go:182] Loaded profile config "multinode-350000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:56:44.278287    6183 config.go:182] Loaded profile config "stopped-upgrade-632000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:56:44.278333    6183 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:56:44.282793    6183 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:56:44.289865    6183 start.go:297] selected driver: qemu2
	I1209 16:56:44.289872    6183 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:56:44.289881    6183 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:56:44.292343    6183 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 16:56:44.295803    6183 out.go:177] * Automatically selected the socket_vmnet network
	E1209 16:56:44.299776    6183 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1209 16:56:44.299787    6183 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:56:44.299802    6183 cni.go:84] Creating CNI manager for "bridge"
	I1209 16:56:44.299805    6183 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 16:56:44.299831    6183 start.go:340] cluster config:
	{Name:enable-default-cni-884000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:56:44.304141    6183 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:56:44.310797    6183 out.go:177] * Starting "enable-default-cni-884000" primary control-plane node in "enable-default-cni-884000" cluster
	I1209 16:56:44.314838    6183 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:56:44.314853    6183 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:56:44.314865    6183 cache.go:56] Caching tarball of preloaded images
	I1209 16:56:44.314933    6183 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:56:44.314939    6183 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:56:44.315004    6183 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/enable-default-cni-884000/config.json ...
	I1209 16:56:44.315014    6183 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/enable-default-cni-884000/config.json: {Name:mk223ec4b0fb2b5a8c0c466c24aa34a6a2eb27d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:56:44.315256    6183 start.go:360] acquireMachinesLock for enable-default-cni-884000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:56:44.315299    6183 start.go:364] duration metric: took 36.25µs to acquireMachinesLock for "enable-default-cni-884000"
	I1209 16:56:44.315310    6183 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:56:44.315355    6183 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:56:44.323771    6183 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 16:56:44.338151    6183 start.go:159] libmachine.API.Create for "enable-default-cni-884000" (driver="qemu2")
	I1209 16:56:44.338176    6183 client.go:168] LocalClient.Create starting
	I1209 16:56:44.338243    6183 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:56:44.338288    6183 main.go:141] libmachine: Decoding PEM data...
	I1209 16:56:44.338299    6183 main.go:141] libmachine: Parsing certificate...
	I1209 16:56:44.338333    6183 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:56:44.338360    6183 main.go:141] libmachine: Decoding PEM data...
	I1209 16:56:44.338367    6183 main.go:141] libmachine: Parsing certificate...
	I1209 16:56:44.338748    6183 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:56:44.507616    6183 main.go:141] libmachine: Creating SSH key...
	I1209 16:56:44.804953    6183 main.go:141] libmachine: Creating Disk image...
	I1209 16:56:44.804962    6183 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:56:44.805197    6183 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/enable-default-cni-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/enable-default-cni-884000/disk.qcow2
	I1209 16:56:44.815516    6183 main.go:141] libmachine: STDOUT: 
	I1209 16:56:44.815541    6183 main.go:141] libmachine: STDERR: 
	I1209 16:56:44.815617    6183 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/enable-default-cni-884000/disk.qcow2 +20000M
	I1209 16:56:44.824376    6183 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:56:44.824391    6183 main.go:141] libmachine: STDERR: 
	I1209 16:56:44.824408    6183 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/enable-default-cni-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/enable-default-cni-884000/disk.qcow2
	I1209 16:56:44.824413    6183 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:56:44.824427    6183 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:56:44.824463    6183 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/enable-default-cni-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/enable-default-cni-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/enable-default-cni-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:a7:7a:41:1d:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/enable-default-cni-884000/disk.qcow2
	I1209 16:56:44.826467    6183 main.go:141] libmachine: STDOUT: 
	I1209 16:56:44.826482    6183 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:56:44.826503    6183 client.go:171] duration metric: took 488.321167ms to LocalClient.Create
	I1209 16:56:46.828700    6183 start.go:128] duration metric: took 2.51331425s to createHost
	I1209 16:56:46.828791    6183 start.go:83] releasing machines lock for "enable-default-cni-884000", held for 2.513485792s
	W1209 16:56:46.828836    6183 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:56:46.838901    6183 out.go:177] * Deleting "enable-default-cni-884000" in qemu2 ...
	W1209 16:56:46.870458    6183 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:56:46.870483    6183 start.go:729] Will try again in 5 seconds ...
	I1209 16:56:51.872615    6183 start.go:360] acquireMachinesLock for enable-default-cni-884000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:56:51.872791    6183 start.go:364] duration metric: took 139.083µs to acquireMachinesLock for "enable-default-cni-884000"
	I1209 16:56:51.872834    6183 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:56:51.872881    6183 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:56:51.883095    6183 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 16:56:51.898312    6183 start.go:159] libmachine.API.Create for "enable-default-cni-884000" (driver="qemu2")
	I1209 16:56:51.898346    6183 client.go:168] LocalClient.Create starting
	I1209 16:56:51.898415    6183 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:56:51.898466    6183 main.go:141] libmachine: Decoding PEM data...
	I1209 16:56:51.898476    6183 main.go:141] libmachine: Parsing certificate...
	I1209 16:56:51.898512    6183 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:56:51.898541    6183 main.go:141] libmachine: Decoding PEM data...
	I1209 16:56:51.898548    6183 main.go:141] libmachine: Parsing certificate...
	I1209 16:56:51.899075    6183 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:56:52.067558    6183 main.go:141] libmachine: Creating SSH key...
	I1209 16:56:52.147672    6183 main.go:141] libmachine: Creating Disk image...
	I1209 16:56:52.147680    6183 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:56:52.147908    6183 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/enable-default-cni-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/enable-default-cni-884000/disk.qcow2
	I1209 16:56:52.158196    6183 main.go:141] libmachine: STDOUT: 
	I1209 16:56:52.158216    6183 main.go:141] libmachine: STDERR: 
	I1209 16:56:52.158272    6183 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/enable-default-cni-884000/disk.qcow2 +20000M
	I1209 16:56:52.167116    6183 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:56:52.167140    6183 main.go:141] libmachine: STDERR: 
	I1209 16:56:52.167155    6183 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/enable-default-cni-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/enable-default-cni-884000/disk.qcow2
	I1209 16:56:52.167162    6183 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:56:52.167173    6183 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:56:52.167221    6183 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/enable-default-cni-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/enable-default-cni-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/enable-default-cni-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:28:94:61:a6:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/enable-default-cni-884000/disk.qcow2
	I1209 16:56:52.169173    6183 main.go:141] libmachine: STDOUT: 
	I1209 16:56:52.169188    6183 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:56:52.169200    6183 client.go:171] duration metric: took 270.851333ms to LocalClient.Create
	I1209 16:56:54.171295    6183 start.go:128] duration metric: took 2.298402542s to createHost
	I1209 16:56:54.171340    6183 start.go:83] releasing machines lock for "enable-default-cni-884000", held for 2.298544375s
	W1209 16:56:54.171556    6183 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:56:54.189096    6183 out.go:201] 
	W1209 16:56:54.193986    6183 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:56:54.193999    6183 out.go:270] * 
	W1209 16:56:54.195304    6183 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:56:54.210030    6183 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (10.04s)
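
[Editor's note] In each of these runs the disk-image preparation succeeds and only the networking step fails: qemu-img convert rewrites the raw scratch file as qcow2, then qemu-img resize grows the image by +20000M, exactly as logged above. A standalone sketch of those two steps (paths are illustrative, not the agent's; assumes qemu-img is on PATH):

	// diskimage.go - editor's sketch mirroring the two logged qemu-img calls.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// run executes one command and aborts on failure, echoing its output,
	// roughly what libmachine's "executing:" log lines correspond to.
	func run(name string, args ...string) {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, name, "failed:", err)
			os.Exit(1)
		}
	}

	func main() {
		raw, qcow2 := "disk.qcow2.raw", "disk.qcow2" // illustrative paths
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2)
		run("qemu-img", "resize", qcow2, "+20000M")
	}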

+
TestNetworkPlugins/group/bridge/Start (9.86s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-884000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-884000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.860311125s)

-- stdout --
	* [bridge-884000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-884000" primary control-plane node in "bridge-884000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-884000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 16:56:56.576254    6299 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:56:56.576403    6299 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:56:56.576406    6299 out.go:358] Setting ErrFile to fd 2...
	I1209 16:56:56.576408    6299 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:56:56.576557    6299 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:56:56.577694    6299 out.go:352] Setting JSON to false
	I1209 16:56:56.596490    6299 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5186,"bootTime":1733787030,"procs":537,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:56:56.596561    6299 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:56:56.603567    6299 out.go:177] * [bridge-884000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:56:56.611497    6299 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:56:56.611555    6299 notify.go:220] Checking for updates...
	I1209 16:56:56.617468    6299 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:56:56.620450    6299 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:56:56.623434    6299 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:56:56.626412    6299 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:56:56.629473    6299 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:56:56.631442    6299 config.go:182] Loaded profile config "multinode-350000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:56:56.631514    6299 config.go:182] Loaded profile config "stopped-upgrade-632000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:56:56.631562    6299 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:56:56.635437    6299 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:56:56.642246    6299 start.go:297] selected driver: qemu2
	I1209 16:56:56.642253    6299 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:56:56.642258    6299 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:56:56.644585    6299 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 16:56:56.648423    6299 out.go:177] * Automatically selected the socket_vmnet network
	I1209 16:56:56.651563    6299 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:56:56.651584    6299 cni.go:84] Creating CNI manager for "bridge"
	I1209 16:56:56.651591    6299 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 16:56:56.651624    6299 start.go:340] cluster config:
	{Name:bridge-884000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:56:56.655984    6299 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:56:56.664417    6299 out.go:177] * Starting "bridge-884000" primary control-plane node in "bridge-884000" cluster
	I1209 16:56:56.668371    6299 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:56:56.668385    6299 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:56:56.668393    6299 cache.go:56] Caching tarball of preloaded images
	I1209 16:56:56.668470    6299 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:56:56.668476    6299 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:56:56.668525    6299 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/bridge-884000/config.json ...
	I1209 16:56:56.668535    6299 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/bridge-884000/config.json: {Name:mkfe00a1f79304e1b232b8cbba3f506cbef76c45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:56:56.668778    6299 start.go:360] acquireMachinesLock for bridge-884000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:56:56.668820    6299 start.go:364] duration metric: took 36.625µs to acquireMachinesLock for "bridge-884000"
	I1209 16:56:56.668831    6299 start.go:93] Provisioning new machine with config: &{Name:bridge-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:56:56.668866    6299 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:56:56.677312    6299 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 16:56:56.691985    6299 start.go:159] libmachine.API.Create for "bridge-884000" (driver="qemu2")
	I1209 16:56:56.692013    6299 client.go:168] LocalClient.Create starting
	I1209 16:56:56.692093    6299 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:56:56.692130    6299 main.go:141] libmachine: Decoding PEM data...
	I1209 16:56:56.692142    6299 main.go:141] libmachine: Parsing certificate...
	I1209 16:56:56.692179    6299 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:56:56.692207    6299 main.go:141] libmachine: Decoding PEM data...
	I1209 16:56:56.692218    6299 main.go:141] libmachine: Parsing certificate...
	I1209 16:56:56.692649    6299 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:56:56.860052    6299 main.go:141] libmachine: Creating SSH key...
	I1209 16:56:56.950520    6299 main.go:141] libmachine: Creating Disk image...
	I1209 16:56:56.950527    6299 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:56:56.950773    6299 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/bridge-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/bridge-884000/disk.qcow2
	I1209 16:56:56.960799    6299 main.go:141] libmachine: STDOUT: 
	I1209 16:56:56.960817    6299 main.go:141] libmachine: STDERR: 
	I1209 16:56:56.960871    6299 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/bridge-884000/disk.qcow2 +20000M
	I1209 16:56:56.969495    6299 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:56:56.969520    6299 main.go:141] libmachine: STDERR: 
	I1209 16:56:56.969542    6299 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/bridge-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/bridge-884000/disk.qcow2
	I1209 16:56:56.969548    6299 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:56:56.969560    6299 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:56:56.969597    6299 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/bridge-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/bridge-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/bridge-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:ae:4f:21:6e:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/bridge-884000/disk.qcow2
	I1209 16:56:56.971470    6299 main.go:141] libmachine: STDOUT: 
	I1209 16:56:56.971484    6299 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:56:56.971503    6299 client.go:171] duration metric: took 279.482917ms to LocalClient.Create
	I1209 16:56:58.973715    6299 start.go:128] duration metric: took 2.304821042s to createHost
	I1209 16:56:58.973789    6299 start.go:83] releasing machines lock for "bridge-884000", held for 2.304960083s
	W1209 16:56:58.973842    6299 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:56:58.987858    6299 out.go:177] * Deleting "bridge-884000" in qemu2 ...
	W1209 16:56:59.018291    6299 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:56:59.018316    6299 start.go:729] Will try again in 5 seconds ...
	I1209 16:57:04.020589    6299 start.go:360] acquireMachinesLock for bridge-884000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:57:04.021309    6299 start.go:364] duration metric: took 568.708µs to acquireMachinesLock for "bridge-884000"
	I1209 16:57:04.021455    6299 start.go:93] Provisioning new machine with config: &{Name:bridge-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:57:04.021857    6299 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:57:04.030585    6299 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 16:57:04.072623    6299 start.go:159] libmachine.API.Create for "bridge-884000" (driver="qemu2")
	I1209 16:57:04.072681    6299 client.go:168] LocalClient.Create starting
	I1209 16:57:04.072822    6299 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:57:04.072899    6299 main.go:141] libmachine: Decoding PEM data...
	I1209 16:57:04.072919    6299 main.go:141] libmachine: Parsing certificate...
	I1209 16:57:04.072979    6299 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:57:04.073037    6299 main.go:141] libmachine: Decoding PEM data...
	I1209 16:57:04.073048    6299 main.go:141] libmachine: Parsing certificate...
	I1209 16:57:04.074024    6299 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:57:04.252439    6299 main.go:141] libmachine: Creating SSH key...
	I1209 16:57:04.333519    6299 main.go:141] libmachine: Creating Disk image...
	I1209 16:57:04.333529    6299 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:57:04.333756    6299 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/bridge-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/bridge-884000/disk.qcow2
	I1209 16:57:04.343845    6299 main.go:141] libmachine: STDOUT: 
	I1209 16:57:04.343862    6299 main.go:141] libmachine: STDERR: 
	I1209 16:57:04.343927    6299 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/bridge-884000/disk.qcow2 +20000M
	I1209 16:57:04.352513    6299 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:57:04.352536    6299 main.go:141] libmachine: STDERR: 
	I1209 16:57:04.352553    6299 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/bridge-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/bridge-884000/disk.qcow2
	I1209 16:57:04.352558    6299 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:57:04.352565    6299 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:57:04.352592    6299 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/bridge-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/bridge-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/bridge-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:69:c2:c3:1d:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/bridge-884000/disk.qcow2
	I1209 16:57:04.354492    6299 main.go:141] libmachine: STDOUT: 
	I1209 16:57:04.354507    6299 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:57:04.354521    6299 client.go:171] duration metric: took 281.833792ms to LocalClient.Create
	I1209 16:57:06.356728    6299 start.go:128] duration metric: took 2.334820917s to createHost
	I1209 16:57:06.356805    6299 start.go:83] releasing machines lock for "bridge-884000", held for 2.335455333s
	W1209 16:57:06.357297    6299 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:57:06.366021    6299 out.go:201] 
	W1209 16:57:06.378151    6299 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:57:06.378183    6299 out.go:270] * 
	* 
	W1209 16:57:06.380946    6299 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:57:06.391040    6299 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.86s)
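
Note: the driver's retry loop ("Will try again in 5 seconds ...") deletes and recreates the VM, and both attempts die at the identical socket_vmnet_client exec, which points to a persistent host condition rather than a transient race. Assuming socket_vmnet_client connects to the socket before exec'ing its trailing command (its documented behavior upstream), the failing step can be reproduced in isolation with a harmless command in place of qemu-system-aarch64; this probe is hypothetical and not taken from this report:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true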

TestNetworkPlugins/group/kubenet/Start (10.09s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-884000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-884000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (10.092183292s)

-- stdout --
	* [kubenet-884000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-884000" primary control-plane node in "kubenet-884000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-884000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 16:57:08.772311    6414 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:57:08.772477    6414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:57:08.772481    6414 out.go:358] Setting ErrFile to fd 2...
	I1209 16:57:08.772484    6414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:57:08.772618    6414 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:57:08.773832    6414 out.go:352] Setting JSON to false
	I1209 16:57:08.791983    6414 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5198,"bootTime":1733787030,"procs":536,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:57:08.792066    6414 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:57:08.798461    6414 out.go:177] * [kubenet-884000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:57:08.806417    6414 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:57:08.806476    6414 notify.go:220] Checking for updates...
	I1209 16:57:08.814454    6414 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:57:08.817386    6414 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:57:08.821412    6414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:57:08.824427    6414 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:57:08.827478    6414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:57:08.830734    6414 config.go:182] Loaded profile config "multinode-350000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:57:08.830809    6414 config.go:182] Loaded profile config "stopped-upgrade-632000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:57:08.830857    6414 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:57:08.834483    6414 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:57:08.841436    6414 start.go:297] selected driver: qemu2
	I1209 16:57:08.841442    6414 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:57:08.841451    6414 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:57:08.843916    6414 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 16:57:08.847447    6414 out.go:177] * Automatically selected the socket_vmnet network
	I1209 16:57:08.851472    6414 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:57:08.851499    6414 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1209 16:57:08.851520    6414 start.go:340] cluster config:
	{Name:kubenet-884000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:57:08.855951    6414 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:57:08.864429    6414 out.go:177] * Starting "kubenet-884000" primary control-plane node in "kubenet-884000" cluster
	I1209 16:57:08.868450    6414 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:57:08.868465    6414 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:57:08.868477    6414 cache.go:56] Caching tarball of preloaded images
	I1209 16:57:08.868544    6414 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:57:08.868550    6414 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:57:08.868613    6414 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/kubenet-884000/config.json ...
	I1209 16:57:08.868625    6414 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/kubenet-884000/config.json: {Name:mkd0c5a735931160d2d65f98bfe510fa38718dc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:57:08.868957    6414 start.go:360] acquireMachinesLock for kubenet-884000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:57:08.869001    6414 start.go:364] duration metric: took 38.667µs to acquireMachinesLock for "kubenet-884000"
	I1209 16:57:08.869012    6414 start.go:93] Provisioning new machine with config: &{Name:kubenet-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:57:08.869039    6414 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:57:08.872364    6414 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 16:57:08.887137    6414 start.go:159] libmachine.API.Create for "kubenet-884000" (driver="qemu2")
	I1209 16:57:08.887164    6414 client.go:168] LocalClient.Create starting
	I1209 16:57:08.887231    6414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:57:08.887269    6414 main.go:141] libmachine: Decoding PEM data...
	I1209 16:57:08.887283    6414 main.go:141] libmachine: Parsing certificate...
	I1209 16:57:08.887321    6414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:57:08.887349    6414 main.go:141] libmachine: Decoding PEM data...
	I1209 16:57:08.887360    6414 main.go:141] libmachine: Parsing certificate...
	I1209 16:57:08.887727    6414 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:57:09.055632    6414 main.go:141] libmachine: Creating SSH key...
	I1209 16:57:09.229961    6414 main.go:141] libmachine: Creating Disk image...
	I1209 16:57:09.229970    6414 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:57:09.230199    6414 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubenet-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubenet-884000/disk.qcow2
	I1209 16:57:09.240238    6414 main.go:141] libmachine: STDOUT: 
	I1209 16:57:09.240263    6414 main.go:141] libmachine: STDERR: 
	I1209 16:57:09.240344    6414 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubenet-884000/disk.qcow2 +20000M
	I1209 16:57:09.248933    6414 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:57:09.248949    6414 main.go:141] libmachine: STDERR: 
	I1209 16:57:09.248963    6414 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubenet-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubenet-884000/disk.qcow2
	I1209 16:57:09.248968    6414 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:57:09.248985    6414 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:57:09.249014    6414 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubenet-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubenet-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubenet-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:e6:04:e8:ed:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubenet-884000/disk.qcow2
	I1209 16:57:09.251046    6414 main.go:141] libmachine: STDOUT: 
	I1209 16:57:09.251063    6414 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:57:09.251098    6414 client.go:171] duration metric: took 363.924708ms to LocalClient.Create
	I1209 16:57:11.253202    6414 start.go:128] duration metric: took 2.384147542s to createHost
	I1209 16:57:11.253233    6414 start.go:83] releasing machines lock for "kubenet-884000", held for 2.384229s
	W1209 16:57:11.253279    6414 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:57:11.263250    6414 out.go:177] * Deleting "kubenet-884000" in qemu2 ...
	W1209 16:57:11.292496    6414 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:57:11.292506    6414 start.go:729] Will try again in 5 seconds ...
	I1209 16:57:16.294796    6414 start.go:360] acquireMachinesLock for kubenet-884000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:57:16.295254    6414 start.go:364] duration metric: took 365.458µs to acquireMachinesLock for "kubenet-884000"
	I1209 16:57:16.295396    6414 start.go:93] Provisioning new machine with config: &{Name:kubenet-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:57:16.295704    6414 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:57:16.308381    6414 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 16:57:16.351049    6414 start.go:159] libmachine.API.Create for "kubenet-884000" (driver="qemu2")
	I1209 16:57:16.351099    6414 client.go:168] LocalClient.Create starting
	I1209 16:57:16.351247    6414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:57:16.351327    6414 main.go:141] libmachine: Decoding PEM data...
	I1209 16:57:16.351345    6414 main.go:141] libmachine: Parsing certificate...
	I1209 16:57:16.351411    6414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:57:16.351468    6414 main.go:141] libmachine: Decoding PEM data...
	I1209 16:57:16.351483    6414 main.go:141] libmachine: Parsing certificate...
	I1209 16:57:16.352236    6414 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:57:16.531033    6414 main.go:141] libmachine: Creating SSH key...
	I1209 16:57:16.753984    6414 main.go:141] libmachine: Creating Disk image...
	I1209 16:57:16.753999    6414 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:57:16.754274    6414 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubenet-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubenet-884000/disk.qcow2
	I1209 16:57:16.765087    6414 main.go:141] libmachine: STDOUT: 
	I1209 16:57:16.765113    6414 main.go:141] libmachine: STDERR: 
	I1209 16:57:16.765177    6414 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubenet-884000/disk.qcow2 +20000M
	I1209 16:57:16.774334    6414 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:57:16.774351    6414 main.go:141] libmachine: STDERR: 
	I1209 16:57:16.774368    6414 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubenet-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubenet-884000/disk.qcow2
	I1209 16:57:16.774372    6414 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:57:16.774380    6414 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:57:16.774407    6414 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubenet-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubenet-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubenet-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:8f:a1:bf:47:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/kubenet-884000/disk.qcow2
	I1209 16:57:16.776356    6414 main.go:141] libmachine: STDOUT: 
	I1209 16:57:16.776370    6414 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:57:16.776386    6414 client.go:171] duration metric: took 425.282334ms to LocalClient.Create
	I1209 16:57:18.777016    6414 start.go:128] duration metric: took 2.48128475s to createHost
	I1209 16:57:18.777068    6414 start.go:83] releasing machines lock for "kubenet-884000", held for 2.48179025s
	W1209 16:57:18.777334    6414 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:57:18.794017    6414 out.go:201] 
	W1209 16:57:18.806151    6414 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:57:18.806206    6414 out.go:270] * 
	* 
	W1209 16:57:18.809033    6414 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:57:18.817010    6414 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (10.09s)
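
Note: unlike the CNI-based runs above, this test configures no CNI at all (cni.go:80 in the log reports the "kubenet" plugin as disabled), yet it fails identically, confirming the problem lies in host provisioning rather than network-plugin configuration. If socket_vmnet cannot be repaired on the agent, the qemu2 driver can in principle fall back to user-mode networking, which bypasses the socket entirely at the cost of reduced host-to-guest connectivity; this sketch assumes the driver still accepts --network=builtin, which is not shown anywhere in this report:

	out/minikube-darwin-arm64 start -p kubenet-884000 --memory=3072 --driver=qemu2 --network=builtin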

TestNetworkPlugins/group/custom-flannel/Start (9.87s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-884000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-884000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.873346042s)

-- stdout --
	* [custom-flannel-884000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-884000" primary control-plane node in "custom-flannel-884000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-884000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 16:57:21.206342    6527 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:57:21.206510    6527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:57:21.206518    6527 out.go:358] Setting ErrFile to fd 2...
	I1209 16:57:21.206520    6527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:57:21.206678    6527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:57:21.207919    6527 out.go:352] Setting JSON to false
	I1209 16:57:21.226383    6527 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5211,"bootTime":1733787030,"procs":535,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:57:21.226468    6527 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:57:21.233168    6527 out.go:177] * [custom-flannel-884000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:57:21.241770    6527 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:57:21.241793    6527 notify.go:220] Checking for updates...
	I1209 16:57:21.249887    6527 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:57:21.252856    6527 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:57:21.255950    6527 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:57:21.258982    6527 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:57:21.261940    6527 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:57:21.265297    6527 config.go:182] Loaded profile config "multinode-350000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:57:21.265365    6527 config.go:182] Loaded profile config "stopped-upgrade-632000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:57:21.265424    6527 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:57:21.268935    6527 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:57:21.277873    6527 start.go:297] selected driver: qemu2
	I1209 16:57:21.277879    6527 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:57:21.277887    6527 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:57:21.280390    6527 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 16:57:21.283966    6527 out.go:177] * Automatically selected the socket_vmnet network
	I1209 16:57:21.286955    6527 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:57:21.286969    6527 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1209 16:57:21.286986    6527 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1209 16:57:21.287019    6527 start.go:340] cluster config:
	{Name:custom-flannel-884000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:57:21.291367    6527 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:57:21.298948    6527 out.go:177] * Starting "custom-flannel-884000" primary control-plane node in "custom-flannel-884000" cluster
	I1209 16:57:21.302962    6527 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:57:21.302978    6527 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:57:21.302990    6527 cache.go:56] Caching tarball of preloaded images
	I1209 16:57:21.303080    6527 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:57:21.303085    6527 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:57:21.303141    6527 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/custom-flannel-884000/config.json ...
	I1209 16:57:21.303155    6527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/custom-flannel-884000/config.json: {Name:mk5daaa283f4c7a7d490f472dae3e7305857a3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:57:21.303471    6527 start.go:360] acquireMachinesLock for custom-flannel-884000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:57:21.303516    6527 start.go:364] duration metric: took 36.583µs to acquireMachinesLock for "custom-flannel-884000"
	I1209 16:57:21.303526    6527 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:57:21.303547    6527 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:57:21.306895    6527 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 16:57:21.321614    6527 start.go:159] libmachine.API.Create for "custom-flannel-884000" (driver="qemu2")
	I1209 16:57:21.321640    6527 client.go:168] LocalClient.Create starting
	I1209 16:57:21.321701    6527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:57:21.321738    6527 main.go:141] libmachine: Decoding PEM data...
	I1209 16:57:21.321750    6527 main.go:141] libmachine: Parsing certificate...
	I1209 16:57:21.321788    6527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:57:21.321816    6527 main.go:141] libmachine: Decoding PEM data...
	I1209 16:57:21.321827    6527 main.go:141] libmachine: Parsing certificate...
	I1209 16:57:21.322177    6527 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:57:21.489689    6527 main.go:141] libmachine: Creating SSH key...
	I1209 16:57:21.532893    6527 main.go:141] libmachine: Creating Disk image...
	I1209 16:57:21.532901    6527 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:57:21.533111    6527 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/custom-flannel-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/custom-flannel-884000/disk.qcow2
	I1209 16:57:21.543037    6527 main.go:141] libmachine: STDOUT: 
	I1209 16:57:21.543060    6527 main.go:141] libmachine: STDERR: 
	I1209 16:57:21.543124    6527 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/custom-flannel-884000/disk.qcow2 +20000M
	I1209 16:57:21.551936    6527 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:57:21.551957    6527 main.go:141] libmachine: STDERR: 
	I1209 16:57:21.551981    6527 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/custom-flannel-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/custom-flannel-884000/disk.qcow2
	I1209 16:57:21.551988    6527 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:57:21.552002    6527 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:57:21.552037    6527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/custom-flannel-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/custom-flannel-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/custom-flannel-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:30:67:14:7e:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/custom-flannel-884000/disk.qcow2
	I1209 16:57:21.553999    6527 main.go:141] libmachine: STDOUT: 
	I1209 16:57:21.554014    6527 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:57:21.554032    6527 client.go:171] duration metric: took 232.388834ms to LocalClient.Create
	I1209 16:57:23.556133    6527 start.go:128] duration metric: took 2.252574667s to createHost
	I1209 16:57:23.556166    6527 start.go:83] releasing machines lock for "custom-flannel-884000", held for 2.252645875s
	W1209 16:57:23.556209    6527 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:57:23.567204    6527 out.go:177] * Deleting "custom-flannel-884000" in qemu2 ...
	W1209 16:57:23.587819    6527 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:57:23.587838    6527 start.go:729] Will try again in 5 seconds ...
	I1209 16:57:28.590268    6527 start.go:360] acquireMachinesLock for custom-flannel-884000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:57:28.590840    6527 start.go:364] duration metric: took 437.25µs to acquireMachinesLock for "custom-flannel-884000"
	I1209 16:57:28.590970    6527 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:57:28.591260    6527 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:57:28.604014    6527 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 16:57:28.654431    6527 start.go:159] libmachine.API.Create for "custom-flannel-884000" (driver="qemu2")
	I1209 16:57:28.654490    6527 client.go:168] LocalClient.Create starting
	I1209 16:57:28.654665    6527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:57:28.654747    6527 main.go:141] libmachine: Decoding PEM data...
	I1209 16:57:28.654770    6527 main.go:141] libmachine: Parsing certificate...
	I1209 16:57:28.654832    6527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:57:28.654890    6527 main.go:141] libmachine: Decoding PEM data...
	I1209 16:57:28.654913    6527 main.go:141] libmachine: Parsing certificate...
	I1209 16:57:28.655673    6527 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:57:28.836219    6527 main.go:141] libmachine: Creating SSH key...
	I1209 16:57:28.976362    6527 main.go:141] libmachine: Creating Disk image...
	I1209 16:57:28.976371    6527 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:57:28.976613    6527 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/custom-flannel-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/custom-flannel-884000/disk.qcow2
	I1209 16:57:28.986789    6527 main.go:141] libmachine: STDOUT: 
	I1209 16:57:28.986816    6527 main.go:141] libmachine: STDERR: 
	I1209 16:57:28.986874    6527 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/custom-flannel-884000/disk.qcow2 +20000M
	I1209 16:57:28.995447    6527 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:57:28.995462    6527 main.go:141] libmachine: STDERR: 
	I1209 16:57:28.995480    6527 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/custom-flannel-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/custom-flannel-884000/disk.qcow2
	I1209 16:57:28.995486    6527 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:57:28.995495    6527 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:57:28.995527    6527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/custom-flannel-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/custom-flannel-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/custom-flannel-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:fe:32:c9:5e:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/custom-flannel-884000/disk.qcow2
	I1209 16:57:28.997392    6527 main.go:141] libmachine: STDOUT: 
	I1209 16:57:28.997405    6527 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:57:28.997418    6527 client.go:171] duration metric: took 342.92225ms to LocalClient.Create
	I1209 16:57:30.999639    6527 start.go:128] duration metric: took 2.408319458s to createHost
	I1209 16:57:30.999715    6527 start.go:83] releasing machines lock for "custom-flannel-884000", held for 2.408851166s
	W1209 16:57:31.000112    6527 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:57:31.009782    6527 out.go:201] 
	W1209 16:57:31.021915    6527 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:57:31.021978    6527 out.go:270] * 
	* 
	W1209 16:57:31.024459    6527 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:57:31.031570    6527 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.87s)
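
Note on the failure mode: in every run captured above, qemu-img convert and qemu-img resize complete with empty STDERR, and the start only breaks when socket_vmnet_client dials /var/run/socket_vmnet. That points at the socket_vmnet daemon on the CI host rather than at qemu or the minikube binary under test. The following standalone probe is a sketch, not part of the test suite, that reproduces the dial the client performs; the only assumption is the socket path shown in the log:

	// probe.go: hypothetical helper; dials the unix socket that
	// socket_vmnet_client uses and reports whether anyone is listening.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// On this host the dial fails with "connection refused", matching the log.
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A refused connection while the socket file exists usually means the daemon exited and left a stale socket behind; a missing file means it was never started after boot. Either way, restarting the daemon (it needs root privileges to use vmnet) should clear this whole group of failures.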

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-884000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-884000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.804300709s)

                                                
                                                
-- stdout --
	* [calico-884000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-884000" primary control-plane node in "calico-884000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-884000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:57:33.648419    6654 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:57:33.648617    6654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:57:33.648623    6654 out.go:358] Setting ErrFile to fd 2...
	I1209 16:57:33.648625    6654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:57:33.648771    6654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:57:33.649978    6654 out.go:352] Setting JSON to false
	I1209 16:57:33.668172    6654 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5223,"bootTime":1733787030,"procs":538,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:57:33.668246    6654 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:57:33.673972    6654 out.go:177] * [calico-884000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:57:33.682893    6654 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:57:33.682951    6654 notify.go:220] Checking for updates...
	I1209 16:57:33.691853    6654 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:57:33.694808    6654 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:57:33.695991    6654 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:57:33.698847    6654 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:57:33.701854    6654 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:57:33.705234    6654 config.go:182] Loaded profile config "multinode-350000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:57:33.705306    6654 config.go:182] Loaded profile config "stopped-upgrade-632000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:57:33.705348    6654 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:57:33.709755    6654 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:57:33.716799    6654 start.go:297] selected driver: qemu2
	I1209 16:57:33.716806    6654 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:57:33.716815    6654 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:57:33.719353    6654 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 16:57:33.723784    6654 out.go:177] * Automatically selected the socket_vmnet network
	I1209 16:57:33.726890    6654 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:57:33.726904    6654 cni.go:84] Creating CNI manager for "calico"
	I1209 16:57:33.726908    6654 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1209 16:57:33.726941    6654 start.go:340] cluster config:
	{Name:calico-884000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:57:33.731731    6654 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:57:33.739816    6654 out.go:177] * Starting "calico-884000" primary control-plane node in "calico-884000" cluster
	I1209 16:57:33.743772    6654 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:57:33.743787    6654 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:57:33.743796    6654 cache.go:56] Caching tarball of preloaded images
	I1209 16:57:33.743869    6654 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:57:33.743874    6654 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:57:33.743928    6654 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/calico-884000/config.json ...
	I1209 16:57:33.743939    6654 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/calico-884000/config.json: {Name:mkc7b065792383789a0cf9f561c1e1b95c369ba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:57:33.744285    6654 start.go:360] acquireMachinesLock for calico-884000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:57:33.744334    6654 start.go:364] duration metric: took 42µs to acquireMachinesLock for "calico-884000"
	I1209 16:57:33.744355    6654 start.go:93] Provisioning new machine with config: &{Name:calico-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:57:33.744380    6654 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:57:33.748843    6654 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 16:57:33.763188    6654 start.go:159] libmachine.API.Create for "calico-884000" (driver="qemu2")
	I1209 16:57:33.763216    6654 client.go:168] LocalClient.Create starting
	I1209 16:57:33.763280    6654 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:57:33.763317    6654 main.go:141] libmachine: Decoding PEM data...
	I1209 16:57:33.763328    6654 main.go:141] libmachine: Parsing certificate...
	I1209 16:57:33.763362    6654 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:57:33.763389    6654 main.go:141] libmachine: Decoding PEM data...
	I1209 16:57:33.763397    6654 main.go:141] libmachine: Parsing certificate...
	I1209 16:57:33.763768    6654 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:57:33.933260    6654 main.go:141] libmachine: Creating SSH key...
	I1209 16:57:33.978406    6654 main.go:141] libmachine: Creating Disk image...
	I1209 16:57:33.978420    6654 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:57:33.978673    6654 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/calico-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/calico-884000/disk.qcow2
	I1209 16:57:33.988695    6654 main.go:141] libmachine: STDOUT: 
	I1209 16:57:33.988717    6654 main.go:141] libmachine: STDERR: 
	I1209 16:57:33.988779    6654 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/calico-884000/disk.qcow2 +20000M
	I1209 16:57:33.997244    6654 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:57:33.997262    6654 main.go:141] libmachine: STDERR: 
	I1209 16:57:33.997284    6654 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/calico-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/calico-884000/disk.qcow2
	I1209 16:57:33.997292    6654 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:57:33.997306    6654 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:57:33.997340    6654 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/calico-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/calico-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/calico-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:85:d5:97:69:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/calico-884000/disk.qcow2
	I1209 16:57:33.999176    6654 main.go:141] libmachine: STDOUT: 
	I1209 16:57:33.999191    6654 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:57:33.999215    6654 client.go:171] duration metric: took 235.993166ms to LocalClient.Create
	I1209 16:57:36.001401    6654 start.go:128] duration metric: took 2.25699525s to createHost
	I1209 16:57:36.001474    6654 start.go:83] releasing machines lock for "calico-884000", held for 2.257134334s
	W1209 16:57:36.001548    6654 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:57:36.012270    6654 out.go:177] * Deleting "calico-884000" in qemu2 ...
	W1209 16:57:36.045665    6654 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:57:36.045688    6654 start.go:729] Will try again in 5 seconds ...
	I1209 16:57:41.047787    6654 start.go:360] acquireMachinesLock for calico-884000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:57:41.047934    6654 start.go:364] duration metric: took 118.166µs to acquireMachinesLock for "calico-884000"
	I1209 16:57:41.047949    6654 start.go:93] Provisioning new machine with config: &{Name:calico-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:57:41.048007    6654 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:57:41.059732    6654 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 16:57:41.075459    6654 start.go:159] libmachine.API.Create for "calico-884000" (driver="qemu2")
	I1209 16:57:41.075489    6654 client.go:168] LocalClient.Create starting
	I1209 16:57:41.075566    6654 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:57:41.075609    6654 main.go:141] libmachine: Decoding PEM data...
	I1209 16:57:41.075617    6654 main.go:141] libmachine: Parsing certificate...
	I1209 16:57:41.075653    6654 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:57:41.075683    6654 main.go:141] libmachine: Decoding PEM data...
	I1209 16:57:41.075690    6654 main.go:141] libmachine: Parsing certificate...
	I1209 16:57:41.076011    6654 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:57:41.245601    6654 main.go:141] libmachine: Creating SSH key...
	I1209 16:57:41.347065    6654 main.go:141] libmachine: Creating Disk image...
	I1209 16:57:41.347073    6654 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:57:41.347311    6654 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/calico-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/calico-884000/disk.qcow2
	I1209 16:57:41.357996    6654 main.go:141] libmachine: STDOUT: 
	I1209 16:57:41.358020    6654 main.go:141] libmachine: STDERR: 
	I1209 16:57:41.358104    6654 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/calico-884000/disk.qcow2 +20000M
	I1209 16:57:41.367059    6654 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:57:41.367075    6654 main.go:141] libmachine: STDERR: 
	I1209 16:57:41.367091    6654 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/calico-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/calico-884000/disk.qcow2
	I1209 16:57:41.367094    6654 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:57:41.367103    6654 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:57:41.367140    6654 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/calico-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/calico-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/calico-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:b7:5e:24:ea:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/calico-884000/disk.qcow2
	I1209 16:57:41.369079    6654 main.go:141] libmachine: STDOUT: 
	I1209 16:57:41.369093    6654 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:57:41.369106    6654 client.go:171] duration metric: took 293.61175ms to LocalClient.Create
	I1209 16:57:43.371202    6654 start.go:128] duration metric: took 2.323188416s to createHost
	I1209 16:57:43.371228    6654 start.go:83] releasing machines lock for "calico-884000", held for 2.323290083s
	W1209 16:57:43.371342    6654 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:57:43.389751    6654 out.go:201] 
	W1209 16:57:43.392801    6654 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:57:43.392807    6654 out.go:270] * 
	* 
	W1209 16:57:43.393262    6654 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:57:43.410790    6654 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.80s)
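
The retry shape is identical across these failed starts: createHost fails, the half-created profile is deleted, minikube waits five seconds, retries once, and the second failure is promoted to the fatal GUEST_PROVISION error with exit status 80 (the status net_test.go reports). A condensed sketch of that control flow, using a simplified stand-in for createHost rather than minikube's real provisioning code:

	// retry.go: illustrative only; mirrors the "Will try again in 5 seconds"
	// behaviour visible in the logs, not minikube's actual implementation.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	func createHost() error {
		// Stand-in failure matching the error captured above.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // the exit status these tests observe
			}
		}
	}

Because both attempts dial the same dead socket, the retry can never succeed, which is why each test in this group fails in roughly ten seconds (two ~2.3 s createHost timeouts plus the 5 s back-off) with the same signature.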

                                                
                                    
TestNetworkPlugins/group/false/Start (9.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-884000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-884000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.889087s)

                                                
                                                
-- stdout --
	* [false-884000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-884000" primary control-plane node in "false-884000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-884000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:57:45.987391    6779 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:57:45.987540    6779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:57:45.987543    6779 out.go:358] Setting ErrFile to fd 2...
	I1209 16:57:45.987545    6779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:57:45.987670    6779 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:57:45.988855    6779 out.go:352] Setting JSON to false
	I1209 16:57:46.006896    6779 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5235,"bootTime":1733787030,"procs":536,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:57:46.006969    6779 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:57:46.013808    6779 out.go:177] * [false-884000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:57:46.020813    6779 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:57:46.020911    6779 notify.go:220] Checking for updates...
	I1209 16:57:46.029751    6779 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:57:46.032804    6779 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:57:46.035801    6779 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:57:46.038748    6779 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:57:46.041759    6779 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:57:46.045085    6779 config.go:182] Loaded profile config "multinode-350000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:57:46.045155    6779 config.go:182] Loaded profile config "stopped-upgrade-632000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:57:46.045200    6779 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:57:46.049719    6779 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:57:46.056810    6779 start.go:297] selected driver: qemu2
	I1209 16:57:46.056816    6779 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:57:46.056825    6779 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:57:46.059260    6779 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 16:57:46.063735    6779 out.go:177] * Automatically selected the socket_vmnet network
	I1209 16:57:46.066804    6779 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:57:46.066821    6779 cni.go:84] Creating CNI manager for "false"
	I1209 16:57:46.066844    6779 start.go:340] cluster config:
	{Name:false-884000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:57:46.071309    6779 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:57:46.078800    6779 out.go:177] * Starting "false-884000" primary control-plane node in "false-884000" cluster
	I1209 16:57:46.082804    6779 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:57:46.082820    6779 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:57:46.082840    6779 cache.go:56] Caching tarball of preloaded images
	I1209 16:57:46.082922    6779 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:57:46.082928    6779 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:57:46.082991    6779 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/false-884000/config.json ...
	I1209 16:57:46.083002    6779 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/false-884000/config.json: {Name:mke3ac181c55ad91e4eb26fd7397514243ed8ab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:57:46.083451    6779 start.go:360] acquireMachinesLock for false-884000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:57:46.083498    6779 start.go:364] duration metric: took 41.542µs to acquireMachinesLock for "false-884000"
	I1209 16:57:46.083510    6779 start.go:93] Provisioning new machine with config: &{Name:false-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:57:46.083536    6779 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:57:46.092747    6779 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 16:57:46.109207    6779 start.go:159] libmachine.API.Create for "false-884000" (driver="qemu2")
	I1209 16:57:46.109235    6779 client.go:168] LocalClient.Create starting
	I1209 16:57:46.109312    6779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:57:46.109348    6779 main.go:141] libmachine: Decoding PEM data...
	I1209 16:57:46.109360    6779 main.go:141] libmachine: Parsing certificate...
	I1209 16:57:46.109400    6779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:57:46.109460    6779 main.go:141] libmachine: Decoding PEM data...
	I1209 16:57:46.109467    6779 main.go:141] libmachine: Parsing certificate...
	I1209 16:57:46.109913    6779 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:57:46.278210    6779 main.go:141] libmachine: Creating SSH key...
	I1209 16:57:46.363531    6779 main.go:141] libmachine: Creating Disk image...
	I1209 16:57:46.363538    6779 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:57:46.363765    6779 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/false-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/false-884000/disk.qcow2
	I1209 16:57:46.373616    6779 main.go:141] libmachine: STDOUT: 
	I1209 16:57:46.373636    6779 main.go:141] libmachine: STDERR: 
	I1209 16:57:46.373701    6779 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/false-884000/disk.qcow2 +20000M
	I1209 16:57:46.382411    6779 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:57:46.382427    6779 main.go:141] libmachine: STDERR: 
	I1209 16:57:46.382440    6779 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/false-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/false-884000/disk.qcow2
	I1209 16:57:46.382445    6779 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:57:46.382455    6779 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:57:46.382492    6779 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/false-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/false-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/false-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:fa:93:cd:ba:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/false-884000/disk.qcow2
	I1209 16:57:46.384325    6779 main.go:141] libmachine: STDOUT: 
	I1209 16:57:46.384340    6779 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:57:46.384358    6779 client.go:171] duration metric: took 275.11775ms to LocalClient.Create
	I1209 16:57:48.386448    6779 start.go:128] duration metric: took 2.302902541s to createHost
	I1209 16:57:48.386504    6779 start.go:83] releasing machines lock for "false-884000", held for 2.303001583s
	W1209 16:57:48.386537    6779 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:57:48.403448    6779 out.go:177] * Deleting "false-884000" in qemu2 ...
	W1209 16:57:48.422444    6779 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:57:48.422451    6779 start.go:729] Will try again in 5 seconds ...
	I1209 16:57:53.424743    6779 start.go:360] acquireMachinesLock for false-884000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:57:53.425271    6779 start.go:364] duration metric: took 424.125µs to acquireMachinesLock for "false-884000"
	I1209 16:57:53.425327    6779 start.go:93] Provisioning new machine with config: &{Name:false-884000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-884000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:57:53.425601    6779 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:57:53.435248    6779 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1209 16:57:53.470336    6779 start.go:159] libmachine.API.Create for "false-884000" (driver="qemu2")
	I1209 16:57:53.470401    6779 client.go:168] LocalClient.Create starting
	I1209 16:57:53.470574    6779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:57:53.470662    6779 main.go:141] libmachine: Decoding PEM data...
	I1209 16:57:53.470677    6779 main.go:141] libmachine: Parsing certificate...
	I1209 16:57:53.470739    6779 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:57:53.470788    6779 main.go:141] libmachine: Decoding PEM data...
	I1209 16:57:53.470801    6779 main.go:141] libmachine: Parsing certificate...
	I1209 16:57:53.471361    6779 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:57:53.643348    6779 main.go:141] libmachine: Creating SSH key...
	I1209 16:57:53.777552    6779 main.go:141] libmachine: Creating Disk image...
	I1209 16:57:53.777560    6779 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:57:53.777819    6779 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/false-884000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/false-884000/disk.qcow2
	I1209 16:57:53.787986    6779 main.go:141] libmachine: STDOUT: 
	I1209 16:57:53.788014    6779 main.go:141] libmachine: STDERR: 
	I1209 16:57:53.788086    6779 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/false-884000/disk.qcow2 +20000M
	I1209 16:57:53.797003    6779 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:57:53.797018    6779 main.go:141] libmachine: STDERR: 
	I1209 16:57:53.797036    6779 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/false-884000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/false-884000/disk.qcow2
	I1209 16:57:53.797041    6779 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:57:53.797047    6779 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:57:53.797085    6779 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/false-884000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/false-884000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/false-884000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:12:83:c3:e4:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/false-884000/disk.qcow2
	I1209 16:57:53.799006    6779 main.go:141] libmachine: STDOUT: 
	I1209 16:57:53.799019    6779 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:57:53.799031    6779 client.go:171] duration metric: took 328.612791ms to LocalClient.Create
	I1209 16:57:55.801148    6779 start.go:128] duration metric: took 2.37552775s to createHost
	I1209 16:57:55.801212    6779 start.go:83] releasing machines lock for "false-884000", held for 2.375924792s
	W1209 16:57:55.801395    6779 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-884000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:57:55.810489    6779 out.go:201] 
	W1209 16:57:55.821519    6779 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:57:55.821543    6779 out.go:270] * 
	* 
	W1209 16:57:55.823196    6779 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:57:55.835470    6779 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.89s)
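
Every qemu2 start failure in this group reduces to the same line in the trace above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and libmachine aborts with exit status 1. The Go sketch below (a hypothetical standalone check, not part of minikube or its test suite) probes that socket the same way; the path is the SocketVMnetPath value from the logs, everything else is illustrative.

// vmnetprobe.go - minimal sketch: dial the socket_vmnet unix socket and
// report whether it accepts connections. "Connection refused" is the exact
// condition the failed tests hit.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, "socket_vmnet unreachable:", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On agents where socket_vmnet comes from Homebrew, restarting its service (typically sudo brew services restart socket_vmnet) is the usual remedy, though the exact mechanism depends on how the agent was provisioned.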

TestStartStop/group/old-k8s-version/serial/FirstStart (10.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-493000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
E1209 16:58:03.013673    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-493000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.948350709s)

-- stdout --
	* [old-k8s-version-493000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-493000" primary control-plane node in "old-k8s-version-493000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-493000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 16:57:58.197110    6897 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:57:58.197274    6897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:57:58.197277    6897 out.go:358] Setting ErrFile to fd 2...
	I1209 16:57:58.197279    6897 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:57:58.197416    6897 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:57:58.198656    6897 out.go:352] Setting JSON to false
	I1209 16:57:58.217138    6897 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5248,"bootTime":1733787030,"procs":536,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:57:58.217215    6897 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:57:58.222546    6897 out.go:177] * [old-k8s-version-493000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:57:58.230608    6897 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:57:58.230693    6897 notify.go:220] Checking for updates...
	I1209 16:57:58.236570    6897 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:57:58.239567    6897 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:57:58.242516    6897 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:57:58.245551    6897 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:57:58.248540    6897 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:57:58.251964    6897 config.go:182] Loaded profile config "multinode-350000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:57:58.252044    6897 config.go:182] Loaded profile config "stopped-upgrade-632000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:57:58.252113    6897 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:57:58.256507    6897 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:57:58.263533    6897 start.go:297] selected driver: qemu2
	I1209 16:57:58.263540    6897 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:57:58.263548    6897 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:57:58.266017    6897 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 16:57:58.270584    6897 out.go:177] * Automatically selected the socket_vmnet network
	I1209 16:57:58.273662    6897 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:57:58.273680    6897 cni.go:84] Creating CNI manager for ""
	I1209 16:57:58.273702    6897 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1209 16:57:58.273726    6897 start.go:340] cluster config:
	{Name:old-k8s-version-493000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-493000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:57:58.278185    6897 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:57:58.286505    6897 out.go:177] * Starting "old-k8s-version-493000" primary control-plane node in "old-k8s-version-493000" cluster
	I1209 16:57:58.289559    6897 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1209 16:57:58.289585    6897 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1209 16:57:58.289596    6897 cache.go:56] Caching tarball of preloaded images
	I1209 16:57:58.289681    6897 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:57:58.289687    6897 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1209 16:57:58.289750    6897 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/old-k8s-version-493000/config.json ...
	I1209 16:57:58.289761    6897 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/old-k8s-version-493000/config.json: {Name:mk36e469daa7de3df0325dc806975cb212239ce0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:57:58.290188    6897 start.go:360] acquireMachinesLock for old-k8s-version-493000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:57:58.290236    6897 start.go:364] duration metric: took 41.583µs to acquireMachinesLock for "old-k8s-version-493000"
	I1209 16:57:58.290248    6897 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-493000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-493000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:57:58.290275    6897 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:57:58.297421    6897 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 16:57:58.313317    6897 start.go:159] libmachine.API.Create for "old-k8s-version-493000" (driver="qemu2")
	I1209 16:57:58.313347    6897 client.go:168] LocalClient.Create starting
	I1209 16:57:58.313423    6897 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:57:58.313458    6897 main.go:141] libmachine: Decoding PEM data...
	I1209 16:57:58.313470    6897 main.go:141] libmachine: Parsing certificate...
	I1209 16:57:58.313508    6897 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:57:58.313537    6897 main.go:141] libmachine: Decoding PEM data...
	I1209 16:57:58.313543    6897 main.go:141] libmachine: Parsing certificate...
	I1209 16:57:58.313953    6897 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:57:58.480329    6897 main.go:141] libmachine: Creating SSH key...
	I1209 16:57:58.531647    6897 main.go:141] libmachine: Creating Disk image...
	I1209 16:57:58.531653    6897 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:57:58.531881    6897 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/disk.qcow2
	I1209 16:57:58.541755    6897 main.go:141] libmachine: STDOUT: 
	I1209 16:57:58.541776    6897 main.go:141] libmachine: STDERR: 
	I1209 16:57:58.541838    6897 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/disk.qcow2 +20000M
	I1209 16:57:58.550656    6897 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:57:58.550670    6897 main.go:141] libmachine: STDERR: 
	I1209 16:57:58.550694    6897 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/disk.qcow2
	I1209 16:57:58.550700    6897 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:57:58.550712    6897 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:57:58.550746    6897 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:34:94:39:12:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/disk.qcow2
	I1209 16:57:58.552733    6897 main.go:141] libmachine: STDOUT: 
	I1209 16:57:58.552745    6897 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:57:58.552765    6897 client.go:171] duration metric: took 239.411625ms to LocalClient.Create
	I1209 16:58:00.554971    6897 start.go:128] duration metric: took 2.264670375s to createHost
	I1209 16:58:00.555069    6897 start.go:83] releasing machines lock for "old-k8s-version-493000", held for 2.2648245s
	W1209 16:58:00.555124    6897 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:00.565133    6897 out.go:177] * Deleting "old-k8s-version-493000" in qemu2 ...
	W1209 16:58:00.603403    6897 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:00.603429    6897 start.go:729] Will try again in 5 seconds ...
	I1209 16:58:05.605583    6897 start.go:360] acquireMachinesLock for old-k8s-version-493000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:58:05.605689    6897 start.go:364] duration metric: took 91.584µs to acquireMachinesLock for "old-k8s-version-493000"
	I1209 16:58:05.605702    6897 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-493000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-493000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:58:05.605757    6897 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:58:05.614497    6897 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 16:58:05.629642    6897 start.go:159] libmachine.API.Create for "old-k8s-version-493000" (driver="qemu2")
	I1209 16:58:05.629673    6897 client.go:168] LocalClient.Create starting
	I1209 16:58:05.629745    6897 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:58:05.629794    6897 main.go:141] libmachine: Decoding PEM data...
	I1209 16:58:05.629805    6897 main.go:141] libmachine: Parsing certificate...
	I1209 16:58:05.629837    6897 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:58:05.629866    6897 main.go:141] libmachine: Decoding PEM data...
	I1209 16:58:05.629873    6897 main.go:141] libmachine: Parsing certificate...
	I1209 16:58:05.630168    6897 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:58:05.971383    6897 main.go:141] libmachine: Creating SSH key...
	I1209 16:58:06.046608    6897 main.go:141] libmachine: Creating Disk image...
	I1209 16:58:06.046620    6897 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:58:06.046859    6897 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/disk.qcow2
	I1209 16:58:06.057283    6897 main.go:141] libmachine: STDOUT: 
	I1209 16:58:06.057310    6897 main.go:141] libmachine: STDERR: 
	I1209 16:58:06.057380    6897 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/disk.qcow2 +20000M
	I1209 16:58:06.066222    6897 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:58:06.066240    6897 main.go:141] libmachine: STDERR: 
	I1209 16:58:06.066255    6897 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/disk.qcow2
	I1209 16:58:06.066260    6897 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:58:06.066268    6897 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:58:06.066294    6897 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:78:d2:6e:93:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/disk.qcow2
	I1209 16:58:06.068312    6897 main.go:141] libmachine: STDOUT: 
	I1209 16:58:06.068327    6897 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:58:06.068341    6897 client.go:171] duration metric: took 438.662916ms to LocalClient.Create
	I1209 16:58:08.070546    6897 start.go:128] duration metric: took 2.464753958s to createHost
	I1209 16:58:08.070651    6897 start.go:83] releasing machines lock for "old-k8s-version-493000", held for 2.46495325s
	W1209 16:58:08.071192    6897 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-493000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-493000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:08.081869    6897 out.go:201] 
	W1209 16:58:08.085963    6897 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:58:08.086003    6897 out.go:270] * 
	* 
	W1209 16:58:08.088131    6897 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:58:08.098913    6897 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-493000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000: exit status 7 (69.632042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-493000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.02s)
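
The trace also shows the fixed retry shape minikube applies here: one createHost attempt, a logged "! StartHost failed, but will try again", a five-second wait, a second attempt, then the hard GUEST_PROVISION error that surfaces as exit status 80. The sketch below reproduces only that control flow, with the attempt count and delay read off this log rather than taken from minikube's source; it is illustrative, not start.go.

// retryshape.go - illustrative reconstruction of the create/retry/fail
// sequence visible in the trace: initial try, one retry after 5s, hard exit.
package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the real libmachine create path; on this CI host
// it always fails the same way.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const attempts = 2 // initial attempt plus one retry, per the log
	var err error
	for i := 0; i < attempts; i++ {
		if err = createHost(); err == nil {
			fmt.Println("host created")
			return
		}
		if i < attempts-1 {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
		}
	}
	fmt.Println("X Exiting due to GUEST_PROVISION:", err)
}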

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-493000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-493000 create -f testdata/busybox.yaml: exit status 1 (29.7825ms)

** stderr ** 
	error: context "old-k8s-version-493000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-493000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000: exit status 7 (33.478875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-493000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000: exit status 7 (33.712542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-493000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
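
This failure is a pure cascade from FirstStart: the VM was never created, so no "old-k8s-version-493000" context was written to the kubeconfig, and kubectl fails before contacting any cluster. A quick way to confirm that from Go, sketched with k8s.io/client-go (assumed to be available as a module dependency; this helper is not part of the test suite):

// listcontexts.go - sketch: load the kubeconfig (honoring KUBECONFIG, which
// the tests point at the minikube-integration directory) and print every
// context name; the profile's context will be missing after a failed start.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := rules.Load()
	if err != nil {
		fmt.Println("failed to load kubeconfig:", err)
		return
	}
	for name := range cfg.Contexts {
		fmt.Println(name)
	}
}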

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-493000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-493000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-493000 describe deploy/metrics-server -n kube-system: exit status 1 (28.232834ms)

** stderr ** 
	error: context "old-k8s-version-493000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-493000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000: exit status 7 (33.787958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-493000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
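
Note how the post-mortem helper tolerates a non-zero exit from minikube status ("status error: exit status 7 (may be ok)"): with the host stopped it only needs the formatted {{.Host}} string. The sketch below captures both the output and the exit code the way the helper does; the binary path and profile name are copied from this run, and the snippet is illustrative rather than the helpers_test.go implementation.

// statuscheck.go - sketch: run `minikube status --format={{.Host}}` for a
// profile and recover its exit code; a stopped profile prints "Stopped" and
// exits non-zero (7 in this run).
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-493000",
		"-n", "old-k8s-version-493000")
	out, err := cmd.Output()
	fmt.Printf("host state: %s\n", out)
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("status exit code:", exitErr.ExitCode())
	}
}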

TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-493000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-493000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.187376625s)

-- stdout --
	* [old-k8s-version-493000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-493000" primary control-plane node in "old-k8s-version-493000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-493000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-493000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 16:58:12.125084    6959 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:58:12.125240    6959 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:12.125243    6959 out.go:358] Setting ErrFile to fd 2...
	I1209 16:58:12.125246    6959 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:12.125365    6959 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:58:12.126479    6959 out.go:352] Setting JSON to false
	I1209 16:58:12.144379    6959 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5262,"bootTime":1733787030,"procs":540,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:58:12.144444    6959 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:58:12.149011    6959 out.go:177] * [old-k8s-version-493000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:58:12.156161    6959 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:58:12.156219    6959 notify.go:220] Checking for updates...
	I1209 16:58:12.163082    6959 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:58:12.164441    6959 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:58:12.167067    6959 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:58:12.170040    6959 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:58:12.173138    6959 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:58:12.176405    6959 config.go:182] Loaded profile config "old-k8s-version-493000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1209 16:58:12.180025    6959 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1209 16:58:12.183054    6959 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:58:12.186961    6959 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 16:58:12.194040    6959 start.go:297] selected driver: qemu2
	I1209 16:58:12.194046    6959 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-493000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-493000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:58:12.194091    6959 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:58:12.196620    6959 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:58:12.196642    6959 cni.go:84] Creating CNI manager for ""
	I1209 16:58:12.196663    6959 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1209 16:58:12.196686    6959 start.go:340] cluster config:
	{Name:old-k8s-version-493000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-493000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:58:12.201109    6959 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:12.209164    6959 out.go:177] * Starting "old-k8s-version-493000" primary control-plane node in "old-k8s-version-493000" cluster
	I1209 16:58:12.212119    6959 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1209 16:58:12.212139    6959 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1209 16:58:12.212149    6959 cache.go:56] Caching tarball of preloaded images
	I1209 16:58:12.212236    6959 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:58:12.212241    6959 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1209 16:58:12.212291    6959 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/old-k8s-version-493000/config.json ...
	I1209 16:58:12.212754    6959 start.go:360] acquireMachinesLock for old-k8s-version-493000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:58:12.212782    6959 start.go:364] duration metric: took 22.25µs to acquireMachinesLock for "old-k8s-version-493000"
	I1209 16:58:12.212790    6959 start.go:96] Skipping create...Using existing machine configuration
	I1209 16:58:12.212795    6959 fix.go:54] fixHost starting: 
	I1209 16:58:12.212901    6959 fix.go:112] recreateIfNeeded on old-k8s-version-493000: state=Stopped err=<nil>
	W1209 16:58:12.212909    6959 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 16:58:12.216066    6959 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-493000" ...
	I1209 16:58:12.224023    6959 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:58:12.224058    6959 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:78:d2:6e:93:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/disk.qcow2
	I1209 16:58:12.226138    6959 main.go:141] libmachine: STDOUT: 
	I1209 16:58:12.226162    6959 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:58:12.226191    6959 fix.go:56] duration metric: took 13.394875ms for fixHost
	I1209 16:58:12.226194    6959 start.go:83] releasing machines lock for "old-k8s-version-493000", held for 13.408666ms
	W1209 16:58:12.226199    6959 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:58:12.226234    6959 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:12.226238    6959 start.go:729] Will try again in 5 seconds ...
	I1209 16:58:17.228492    6959 start.go:360] acquireMachinesLock for old-k8s-version-493000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:58:17.228910    6959 start.go:364] duration metric: took 319.666µs to acquireMachinesLock for "old-k8s-version-493000"
	I1209 16:58:17.228976    6959 start.go:96] Skipping create...Using existing machine configuration
	I1209 16:58:17.228991    6959 fix.go:54] fixHost starting: 
	I1209 16:58:17.229539    6959 fix.go:112] recreateIfNeeded on old-k8s-version-493000: state=Stopped err=<nil>
	W1209 16:58:17.229561    6959 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 16:58:17.237242    6959 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-493000" ...
	I1209 16:58:17.240318    6959 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:58:17.240496    6959 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:78:d2:6e:93:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/old-k8s-version-493000/disk.qcow2
	I1209 16:58:17.249061    6959 main.go:141] libmachine: STDOUT: 
	I1209 16:58:17.249120    6959 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:58:17.249220    6959 fix.go:56] duration metric: took 20.229125ms for fixHost
	I1209 16:58:17.249234    6959 start.go:83] releasing machines lock for "old-k8s-version-493000", held for 20.305417ms
	W1209 16:58:17.249412    6959 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-493000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-493000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:17.256248    6959 out.go:201] 
	W1209 16:58:17.260368    6959 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:58:17.260395    6959 out.go:270] * 
	* 
	W1209 16:58:17.261941    6959 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:58:17.271171    6959 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-493000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000: exit status 7 (53.747708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-493000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)
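
Diagnostic note (not produced by the test run): every failure in this group shares the root cause visible in the stderr above — socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM never gets its network device and never boots. A minimal host-side check might look like the following; exact paths and service labels depend on how socket_vmnet was installed (Homebrew vs. a make install under /opt/socket_vmnet), so treat them as illustrative:

	ls -l /var/run/socket_vmnet               # the unix socket should exist while the daemon is running
	sudo launchctl list | grep -i vmnet       # a loaded socket_vmnet launchd service should appear here
	sudo brew services restart socket_vmnet   # one way to restart the daemon, if installed via Homebrew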

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-493000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000: exit status 7 (34.150333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-493000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-493000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-493000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-493000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.157208ms)

** stderr ** 
	error: context "old-k8s-version-493000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-493000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000: exit status 7 (33.198125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-493000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
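
Diagnostic note (not produced by the test run): the context "old-k8s-version-493000" does not exist errors in this and the previous test are downstream of the failed SecondStart; the profile's kubeconfig context was never (re)created because the VM never provisioned. One quick, illustrative way to confirm which contexts the test kubeconfig actually contains:

	kubectl config get-contexts --kubeconfig /Users/jenkins/minikube-integration/20062-1231/kubeconfig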

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-493000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000: exit status 7 (33.7865ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-493000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/old-k8s-version/serial/Pause (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-493000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-493000 --alsologtostderr -v=1: exit status 83 (46.381709ms)

-- stdout --
	* The control-plane node old-k8s-version-493000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-493000"

-- /stdout --
** stderr ** 
	I1209 16:58:17.542645    6982 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:58:17.543712    6982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:17.543717    6982 out.go:358] Setting ErrFile to fd 2...
	I1209 16:58:17.543720    6982 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:17.543892    6982 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:58:17.544111    6982 out.go:352] Setting JSON to false
	I1209 16:58:17.544119    6982 mustload.go:65] Loading cluster: old-k8s-version-493000
	I1209 16:58:17.544341    6982 config.go:182] Loaded profile config "old-k8s-version-493000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1209 16:58:17.549012    6982 out.go:177] * The control-plane node old-k8s-version-493000 host is not running: state=Stopped
	I1209 16:58:17.551947    6982 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-493000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-493000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000: exit status 7 (34.800583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-493000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000: exit status 7 (33.721084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-493000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.12s)

TestStartStop/group/no-preload/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-558000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-558000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.906279583s)

-- stdout --
	* [no-preload-558000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-558000" primary control-plane node in "no-preload-558000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-558000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 16:58:18.008448    7006 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:58:18.008603    7006 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:18.008607    7006 out.go:358] Setting ErrFile to fd 2...
	I1209 16:58:18.008609    7006 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:18.008734    7006 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:58:18.009920    7006 out.go:352] Setting JSON to false
	I1209 16:58:18.028299    7006 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5268,"bootTime":1733787030,"procs":537,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:58:18.028366    7006 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:58:18.033672    7006 out.go:177] * [no-preload-558000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:58:18.039698    7006 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:58:18.039752    7006 notify.go:220] Checking for updates...
	I1209 16:58:18.046501    7006 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:58:18.049615    7006 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:58:18.052709    7006 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:58:18.055618    7006 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:58:18.058621    7006 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:58:18.061946    7006 config.go:182] Loaded profile config "multinode-350000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:58:18.062014    7006 config.go:182] Loaded profile config "stopped-upgrade-632000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1209 16:58:18.062063    7006 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:58:18.065662    7006 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:58:18.072689    7006 start.go:297] selected driver: qemu2
	I1209 16:58:18.072698    7006 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:58:18.072706    7006 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:58:18.075139    7006 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 16:58:18.077584    7006 out.go:177] * Automatically selected the socket_vmnet network
	I1209 16:58:18.081648    7006 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:58:18.081665    7006 cni.go:84] Creating CNI manager for ""
	I1209 16:58:18.081688    7006 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:58:18.081692    7006 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 16:58:18.081727    7006 start.go:340] cluster config:
	{Name:no-preload-558000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-558000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:58:18.086282    7006 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:18.094635    7006 out.go:177] * Starting "no-preload-558000" primary control-plane node in "no-preload-558000" cluster
	I1209 16:58:18.098629    7006 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:58:18.098711    7006 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/no-preload-558000/config.json ...
	I1209 16:58:18.098731    7006 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/no-preload-558000/config.json: {Name:mk4a687647f24354e8b3befd6eb53aaf7eb4aa37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:58:18.098734    7006 cache.go:107] acquiring lock: {Name:mkc92f5b3033bc49eb857fe8afc652e5483485ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:18.098742    7006 cache.go:107] acquiring lock: {Name:mk3e6e07b140be9dfe99d9e3684974fc4c0073ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:18.098760    7006 cache.go:107] acquiring lock: {Name:mkc0d742c11a5f0e2818b1813d085fc38ccc568e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:18.098848    7006 cache.go:115] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1209 16:58:18.098856    7006 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 124.334µs
	I1209 16:58:18.098863    7006 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1209 16:58:18.098733    7006 cache.go:107] acquiring lock: {Name:mk7b0270decfe460dea90d22d72bbcacdd41b330 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:18.098885    7006 cache.go:107] acquiring lock: {Name:mk407f771861a38e0b493f2fb2c684d1ec0071f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:18.098949    7006 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1209 16:58:18.098953    7006 cache.go:107] acquiring lock: {Name:mka816f1d818910c192413c39f78afc400907907 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:18.099006    7006 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 16:58:18.098990    7006 cache.go:107] acquiring lock: {Name:mk0e779ac6e1c0f6b0916f1a966fb5687951fbcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:18.099007    7006 cache.go:107] acquiring lock: {Name:mk47649e1c8f141818f0904548861df40ddc9447 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:18.098951    7006 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 16:58:18.099195    7006 start.go:360] acquireMachinesLock for no-preload-558000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:58:18.099317    7006 start.go:364] duration metric: took 115.792µs to acquireMachinesLock for "no-preload-558000"
	I1209 16:58:18.099320    7006 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 16:58:18.099327    7006 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 16:58:18.099332    7006 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1209 16:58:18.099328    7006 start.go:93] Provisioning new machine with config: &{Name:no-preload-558000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-558000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:58:18.099356    7006 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:58:18.099403    7006 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 16:58:18.107477    7006 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 16:58:18.110416    7006 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1209 16:58:18.111424    7006 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1209 16:58:18.111765    7006 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1209 16:58:18.111799    7006 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1209 16:58:18.111816    7006 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1209 16:58:18.111822    7006 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1209 16:58:18.113616    7006 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1209 16:58:18.123892    7006 start.go:159] libmachine.API.Create for "no-preload-558000" (driver="qemu2")
	I1209 16:58:18.123915    7006 client.go:168] LocalClient.Create starting
	I1209 16:58:18.123999    7006 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:58:18.124039    7006 main.go:141] libmachine: Decoding PEM data...
	I1209 16:58:18.124050    7006 main.go:141] libmachine: Parsing certificate...
	I1209 16:58:18.124086    7006 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:58:18.124120    7006 main.go:141] libmachine: Decoding PEM data...
	I1209 16:58:18.124127    7006 main.go:141] libmachine: Parsing certificate...
	I1209 16:58:18.124517    7006 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:58:18.300131    7006 main.go:141] libmachine: Creating SSH key...
	I1209 16:58:18.423294    7006 main.go:141] libmachine: Creating Disk image...
	I1209 16:58:18.423317    7006 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:58:18.423580    7006 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/disk.qcow2
	I1209 16:58:18.433805    7006 main.go:141] libmachine: STDOUT: 
	I1209 16:58:18.433826    7006 main.go:141] libmachine: STDERR: 
	I1209 16:58:18.433889    7006 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/disk.qcow2 +20000M
	I1209 16:58:18.442946    7006 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:58:18.442962    7006 main.go:141] libmachine: STDERR: 
	I1209 16:58:18.442971    7006 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/disk.qcow2
	I1209 16:58:18.442977    7006 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:58:18.442990    7006 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:58:18.443015    7006 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:3d:3e:43:ef:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/disk.qcow2
	I1209 16:58:18.445042    7006 main.go:141] libmachine: STDOUT: 
	I1209 16:58:18.445059    7006 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:58:18.445081    7006 client.go:171] duration metric: took 321.163167ms to LocalClient.Create
	I1209 16:58:18.550035    7006 cache.go:162] opening:  /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1209 16:58:18.606226    7006 cache.go:162] opening:  /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1209 16:58:18.611661    7006 cache.go:162] opening:  /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2
	I1209 16:58:18.670037    7006 cache.go:162] opening:  /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1209 16:58:18.761898    7006 cache.go:162] opening:  /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2
	I1209 16:58:18.808101    7006 cache.go:157] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1209 16:58:18.808117    7006 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 709.357458ms
	I1209 16:58:18.808132    7006 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1209 16:58:18.825524    7006 cache.go:162] opening:  /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1209 16:58:18.857853    7006 cache.go:162] opening:  /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2
	I1209 16:58:20.445378    7006 start.go:128] duration metric: took 2.345995416s to createHost
	I1209 16:58:20.445461    7006 start.go:83] releasing machines lock for "no-preload-558000", held for 2.346137833s
	W1209 16:58:20.445524    7006 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:20.461927    7006 out.go:177] * Deleting "no-preload-558000" in qemu2 ...
	W1209 16:58:20.499876    7006 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:20.499966    7006 start.go:729] Will try again in 5 seconds ...
	I1209 16:58:22.665076    7006 cache.go:157] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1209 16:58:22.665184    7006 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 4.566317209s
	I1209 16:58:22.665236    7006 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1209 16:58:23.130768    7006 cache.go:157] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1209 16:58:23.130818    7006 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 5.03183525s
	I1209 16:58:23.130843    7006 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1209 16:58:23.457905    7006 cache.go:157] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1209 16:58:23.457972    7006 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 5.359034917s
	I1209 16:58:23.458001    7006 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1209 16:58:24.076183    7006 cache.go:157] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1209 16:58:24.076234    7006 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 5.977506792s
	I1209 16:58:24.076260    7006 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1209 16:58:24.575008    7006 cache.go:157] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1209 16:58:24.575082    7006 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 6.476348333s
	I1209 16:58:24.575126    7006 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1209 16:58:25.500242    7006 start.go:360] acquireMachinesLock for no-preload-558000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:58:25.500768    7006 start.go:364] duration metric: took 432.208µs to acquireMachinesLock for "no-preload-558000"
	I1209 16:58:25.500905    7006 start.go:93] Provisioning new machine with config: &{Name:no-preload-558000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-558000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:58:25.501210    7006 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:58:25.511948    7006 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 16:58:25.561246    7006 start.go:159] libmachine.API.Create for "no-preload-558000" (driver="qemu2")
	I1209 16:58:25.561323    7006 client.go:168] LocalClient.Create starting
	I1209 16:58:25.561554    7006 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:58:25.561654    7006 main.go:141] libmachine: Decoding PEM data...
	I1209 16:58:25.561680    7006 main.go:141] libmachine: Parsing certificate...
	I1209 16:58:25.561768    7006 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:58:25.561825    7006 main.go:141] libmachine: Decoding PEM data...
	I1209 16:58:25.561835    7006 main.go:141] libmachine: Parsing certificate...
	I1209 16:58:25.562428    7006 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:58:25.731225    7006 main.go:141] libmachine: Creating SSH key...
	I1209 16:58:25.812164    7006 main.go:141] libmachine: Creating Disk image...
	I1209 16:58:25.812170    7006 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:58:25.812409    7006 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/disk.qcow2
	I1209 16:58:25.822467    7006 main.go:141] libmachine: STDOUT: 
	I1209 16:58:25.822487    7006 main.go:141] libmachine: STDERR: 
	I1209 16:58:25.822549    7006 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/disk.qcow2 +20000M
	I1209 16:58:25.831167    7006 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:58:25.831184    7006 main.go:141] libmachine: STDERR: 
	I1209 16:58:25.831196    7006 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/disk.qcow2
	I1209 16:58:25.831203    7006 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:58:25.831211    7006 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:58:25.831244    7006 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:fe:81:fa:ba:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/disk.qcow2
	I1209 16:58:25.833144    7006 main.go:141] libmachine: STDOUT: 
	I1209 16:58:25.833164    7006 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:58:25.833180    7006 client.go:171] duration metric: took 271.83925ms to LocalClient.Create
	I1209 16:58:27.833801    7006 start.go:128] duration metric: took 2.332493625s to createHost
	I1209 16:58:27.833866    7006 start.go:83] releasing machines lock for "no-preload-558000", held for 2.333075625s
	W1209 16:58:27.834116    7006 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-558000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-558000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:27.845603    7006 out.go:201] 
	W1209 16:58:27.855783    7006 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:58:27.855830    7006 out.go:270] * 
	* 
	W1209 16:58:27.857428    7006 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:58:27.868540    7006 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-558000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (67.880667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.98s)
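
Diagnostic sketch (not produced by the test run): to separate a broken guest image from broken host networking, the qemu invocation logged above can be rerun by hand with qemu's user-mode networking in place of the socket_vmnet wrapper; if the VM boots this way, the fault is isolated to the socket_vmnet daemon. Everything below is copied from the log except the -netdev swap and -serial stdio (added to watch the console, assuming the guest writes boot output to its serial port); -daemonize, -qmp and -pidfile are dropped so it runs in the foreground:

	qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2200 -smp 2 \
	  -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	  -display none -serial stdio -boot d \
	  -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/boot2docker.iso \
	  -device virtio-net-pci,netdev=net0 -netdev user,id=net0 \
	  /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/disk.qcow2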

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-877000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-877000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.999544709s)

-- stdout --
	* [default-k8s-diff-port-877000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-877000" primary control-plane node in "default-k8s-diff-port-877000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-877000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 16:58:21.769619    7049 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:58:21.769758    7049 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:21.769761    7049 out.go:358] Setting ErrFile to fd 2...
	I1209 16:58:21.769764    7049 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:21.769916    7049 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:58:21.771033    7049 out.go:352] Setting JSON to false
	I1209 16:58:21.790364    7049 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5271,"bootTime":1733787030,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:58:21.790435    7049 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:58:21.794694    7049 out.go:177] * [default-k8s-diff-port-877000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:58:21.802716    7049 notify.go:220] Checking for updates...
	I1209 16:58:21.806604    7049 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:58:21.813606    7049 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:58:21.816595    7049 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:58:21.820599    7049 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:58:21.827545    7049 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:58:21.837307    7049 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:58:21.842113    7049 config.go:182] Loaded profile config "multinode-350000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:58:21.842200    7049 config.go:182] Loaded profile config "no-preload-558000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:58:21.842266    7049 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:58:21.846576    7049 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:58:21.851618    7049 start.go:297] selected driver: qemu2
	I1209 16:58:21.851625    7049 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:58:21.851635    7049 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:58:21.854410    7049 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 16:58:21.857663    7049 out.go:177] * Automatically selected the socket_vmnet network
	I1209 16:58:21.860676    7049 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:58:21.860711    7049 cni.go:84] Creating CNI manager for ""
	I1209 16:58:21.860730    7049 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:58:21.860735    7049 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 16:58:21.860759    7049 start.go:340] cluster config:
	{Name:default-k8s-diff-port-877000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:58:21.865770    7049 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:21.868647    7049 out.go:177] * Starting "default-k8s-diff-port-877000" primary control-plane node in "default-k8s-diff-port-877000" cluster
	I1209 16:58:21.876648    7049 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:58:21.876676    7049 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:58:21.876690    7049 cache.go:56] Caching tarball of preloaded images
	I1209 16:58:21.876782    7049 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:58:21.876791    7049 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:58:21.876857    7049 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/default-k8s-diff-port-877000/config.json ...
	I1209 16:58:21.876869    7049 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/default-k8s-diff-port-877000/config.json: {Name:mk771484b77936198b32494e12fa346d405a2ee9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:58:21.877138    7049 start.go:360] acquireMachinesLock for default-k8s-diff-port-877000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:58:21.877190    7049 start.go:364] duration metric: took 42.042µs to acquireMachinesLock for "default-k8s-diff-port-877000"
	I1209 16:58:21.877202    7049 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:58:21.877231    7049 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:58:21.881606    7049 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 16:58:21.899028    7049 start.go:159] libmachine.API.Create for "default-k8s-diff-port-877000" (driver="qemu2")
	I1209 16:58:21.899053    7049 client.go:168] LocalClient.Create starting
	I1209 16:58:21.899127    7049 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:58:21.899167    7049 main.go:141] libmachine: Decoding PEM data...
	I1209 16:58:21.899178    7049 main.go:141] libmachine: Parsing certificate...
	I1209 16:58:21.899217    7049 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:58:21.899248    7049 main.go:141] libmachine: Decoding PEM data...
	I1209 16:58:21.899261    7049 main.go:141] libmachine: Parsing certificate...
	I1209 16:58:21.899637    7049 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:58:22.059679    7049 main.go:141] libmachine: Creating SSH key...
	I1209 16:58:22.262812    7049 main.go:141] libmachine: Creating Disk image...
	I1209 16:58:22.262820    7049 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:58:22.263086    7049 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/disk.qcow2
	I1209 16:58:22.273346    7049 main.go:141] libmachine: STDOUT: 
	I1209 16:58:22.273362    7049 main.go:141] libmachine: STDERR: 
	I1209 16:58:22.273428    7049 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/disk.qcow2 +20000M
	I1209 16:58:22.282152    7049 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:58:22.282166    7049 main.go:141] libmachine: STDERR: 
	I1209 16:58:22.282179    7049 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/disk.qcow2
	I1209 16:58:22.282185    7049 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:58:22.282198    7049 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:58:22.282240    7049 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:65:85:14:23:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/disk.qcow2
	I1209 16:58:22.284115    7049 main.go:141] libmachine: STDOUT: 
	I1209 16:58:22.284128    7049 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:58:22.284146    7049 client.go:171] duration metric: took 385.085833ms to LocalClient.Create
	I1209 16:58:24.286427    7049 start.go:128] duration metric: took 2.409174416s to createHost
	I1209 16:58:24.286494    7049 start.go:83] releasing machines lock for "default-k8s-diff-port-877000", held for 2.4092955s
	W1209 16:58:24.286598    7049 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:24.294880    7049 out.go:177] * Deleting "default-k8s-diff-port-877000" in qemu2 ...
	W1209 16:58:24.329207    7049 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:24.329244    7049 start.go:729] Will try again in 5 seconds ...
	I1209 16:58:29.329620    7049 start.go:360] acquireMachinesLock for default-k8s-diff-port-877000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:58:29.330107    7049 start.go:364] duration metric: took 315.792µs to acquireMachinesLock for "default-k8s-diff-port-877000"
	I1209 16:58:29.330259    7049 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:58:29.330570    7049 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:58:29.340227    7049 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 16:58:29.388995    7049 start.go:159] libmachine.API.Create for "default-k8s-diff-port-877000" (driver="qemu2")
	I1209 16:58:29.389049    7049 client.go:168] LocalClient.Create starting
	I1209 16:58:29.389165    7049 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:58:29.389223    7049 main.go:141] libmachine: Decoding PEM data...
	I1209 16:58:29.389242    7049 main.go:141] libmachine: Parsing certificate...
	I1209 16:58:29.389315    7049 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:58:29.389355    7049 main.go:141] libmachine: Decoding PEM data...
	I1209 16:58:29.389367    7049 main.go:141] libmachine: Parsing certificate...
	I1209 16:58:29.390099    7049 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:58:29.580969    7049 main.go:141] libmachine: Creating SSH key...
	I1209 16:58:29.668125    7049 main.go:141] libmachine: Creating Disk image...
	I1209 16:58:29.668131    7049 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:58:29.668363    7049 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/disk.qcow2
	I1209 16:58:29.678298    7049 main.go:141] libmachine: STDOUT: 
	I1209 16:58:29.678316    7049 main.go:141] libmachine: STDERR: 
	I1209 16:58:29.678369    7049 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/disk.qcow2 +20000M
	I1209 16:58:29.686927    7049 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:58:29.686943    7049 main.go:141] libmachine: STDERR: 
	I1209 16:58:29.686955    7049 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/disk.qcow2
	I1209 16:58:29.686968    7049 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:58:29.686979    7049 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:58:29.687003    7049 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:a3:de:be:64:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/disk.qcow2
	I1209 16:58:29.688832    7049 main.go:141] libmachine: STDOUT: 
	I1209 16:58:29.688846    7049 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:58:29.688859    7049 client.go:171] duration metric: took 299.804375ms to LocalClient.Create
	I1209 16:58:31.691048    7049 start.go:128] duration metric: took 2.360440084s to createHost
	I1209 16:58:31.691099    7049 start.go:83] releasing machines lock for "default-k8s-diff-port-877000", held for 2.360968042s
	W1209 16:58:31.691662    7049 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-877000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-877000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:31.701327    7049 out.go:201] 
	W1209 16:58:31.704400    7049 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:58:31.704430    7049 out.go:270] * 
	* 
	W1209 16:58:31.707304    7049 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:58:31.720354    7049 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-877000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000: exit status 7 (66.386583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.07s)
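
[editor's note] This failure is environmental rather than a Kubernetes regression: both VM launches above are wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, and the repeated STDERR `Failed to connect to "/var/run/socket_vmnet": Connection refused` means no socket_vmnet daemon was listening on that socket, so QEMU was never started (hence the post-mortem `status` exiting 7 with state "Stopped"). A minimal sketch for checking and starting the daemon by hand, assuming the /opt/socket_vmnet install prefix shown in the logged command; the gateway address is illustrative, not taken from this run:

	ls -l /var/run/socket_vmnet   # confirm whether the control socket the client dials even exists
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
	# runs the daemon in the foreground; root is required for macOS vmnet.framework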

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-558000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-558000 create -f testdata/busybox.yaml: exit status 1 (29.501458ms)

** stderr **
	error: context "no-preload-558000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-558000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (32.95875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-558000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (32.982208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
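
[editor's note] This DeployApp failure is a cascade from the start failures above: the no-preload-558000 cluster was never provisioned, so no kubeconfig context was ever written for it and every kubectl call against that context fails immediately. A quick sanity check, using the KUBECONFIG path logged earlier in this report:

	KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig kubectl config get-contexts
	# after the failed start, the no-preload-558000 context should be absent from this list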

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-558000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-558000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-558000 describe deploy/metrics-server -n kube-system: exit status 1 (27.419458ms)

** stderr **
	error: context "no-preload-558000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-558000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (34.106583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.69s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-558000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-558000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.615221125s)

-- stdout --
	* [no-preload-558000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-558000" primary control-plane node in "no-preload-558000" cluster
	* Restarting existing qemu2 VM for "no-preload-558000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-558000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr **
	I1209 16:58:31.200183    7107 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:58:31.200352    7107 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:31.200355    7107 out.go:358] Setting ErrFile to fd 2...
	I1209 16:58:31.200358    7107 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:31.200508    7107 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:58:31.201578    7107 out.go:352] Setting JSON to false
	I1209 16:58:31.219266    7107 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5281,"bootTime":1733787030,"procs":536,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:58:31.219340    7107 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:58:31.223681    7107 out.go:177] * [no-preload-558000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:58:31.230671    7107 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:58:31.230718    7107 notify.go:220] Checking for updates...
	I1209 16:58:31.237624    7107 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:58:31.240620    7107 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:58:31.243643    7107 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:58:31.246645    7107 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:58:31.248109    7107 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:58:31.251984    7107 config.go:182] Loaded profile config "no-preload-558000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:58:31.252284    7107 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:58:31.255595    7107 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 16:58:31.260624    7107 start.go:297] selected driver: qemu2
	I1209 16:58:31.260631    7107 start.go:901] validating driver "qemu2" against &{Name:no-preload-558000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-558000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:58:31.260692    7107 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:58:31.263212    7107 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:58:31.263232    7107 cni.go:84] Creating CNI manager for ""
	I1209 16:58:31.263262    7107 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:58:31.263294    7107 start.go:340] cluster config:
	{Name:no-preload-558000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-558000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:58:31.267642    7107 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:31.275538    7107 out.go:177] * Starting "no-preload-558000" primary control-plane node in "no-preload-558000" cluster
	I1209 16:58:31.279581    7107 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:58:31.279642    7107 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/no-preload-558000/config.json ...
	I1209 16:58:31.279658    7107 cache.go:107] acquiring lock: {Name:mk7b0270decfe460dea90d22d72bbcacdd41b330 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:31.279658    7107 cache.go:107] acquiring lock: {Name:mkc92f5b3033bc49eb857fe8afc652e5483485ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:31.279666    7107 cache.go:107] acquiring lock: {Name:mk3e6e07b140be9dfe99d9e3684974fc4c0073ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:31.279677    7107 cache.go:107] acquiring lock: {Name:mk0e779ac6e1c0f6b0916f1a966fb5687951fbcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:31.279690    7107 cache.go:107] acquiring lock: {Name:mk47649e1c8f141818f0904548861df40ddc9447 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:31.279742    7107 cache.go:115] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1209 16:58:31.279747    7107 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 93.167µs
	I1209 16:58:31.279754    7107 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1209 16:58:31.279762    7107 cache.go:115] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1209 16:58:31.279768    7107 cache.go:115] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1209 16:58:31.279772    7107 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 85.417µs
	I1209 16:58:31.279778    7107 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1209 16:58:31.279767    7107 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 90.083µs
	I1209 16:58:31.279776    7107 cache.go:107] acquiring lock: {Name:mka816f1d818910c192413c39f78afc400907907 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:31.279791    7107 cache.go:107] acquiring lock: {Name:mk407f771861a38e0b493f2fb2c684d1ec0071f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:31.279799    7107 cache.go:115] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1209 16:58:31.279803    7107 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 149.334µs
	I1209 16:58:31.279807    7107 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1209 16:58:31.279813    7107 cache.go:107] acquiring lock: {Name:mkc0d742c11a5f0e2818b1813d085fc38ccc568e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:31.279782    7107 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1209 16:58:31.279844    7107 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1209 16:58:31.279854    7107 cache.go:115] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1209 16:58:31.279858    7107 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 68.375µs
	I1209 16:58:31.279862    7107 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1209 16:58:31.279889    7107 cache.go:115] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1209 16:58:31.279894    7107 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 235.709µs
	I1209 16:58:31.279898    7107 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1209 16:58:31.279883    7107 cache.go:115] /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1209 16:58:31.279913    7107 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 99.583µs
	I1209 16:58:31.279916    7107 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1209 16:58:31.280091    7107 start.go:360] acquireMachinesLock for no-preload-558000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:58:31.283455    7107 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1209 16:58:31.691294    7107 start.go:364] duration metric: took 411.161375ms to acquireMachinesLock for "no-preload-558000"
	I1209 16:58:31.691486    7107 start.go:96] Skipping create...Using existing machine configuration
	I1209 16:58:31.691516    7107 fix.go:54] fixHost starting: 
	I1209 16:58:31.692198    7107 fix.go:112] recreateIfNeeded on no-preload-558000: state=Stopped err=<nil>
	W1209 16:58:31.692227    7107 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 16:58:31.701329    7107 out.go:177] * Restarting existing qemu2 VM for "no-preload-558000" ...
	I1209 16:58:31.708409    7107 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:58:31.708633    7107 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:fe:81:fa:ba:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/disk.qcow2
	I1209 16:58:31.719483    7107 main.go:141] libmachine: STDOUT: 
	I1209 16:58:31.719551    7107 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:58:31.719654    7107 fix.go:56] duration metric: took 28.141834ms for fixHost
	I1209 16:58:31.719674    7107 start.go:83] releasing machines lock for "no-preload-558000", held for 28.312792ms
	W1209 16:58:31.719700    7107 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:58:31.719855    7107 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:31.719880    7107 start.go:729] Will try again in 5 seconds ...
	I1209 16:58:31.727087    7107 cache.go:162] opening:  /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1209 16:58:36.720104    7107 start.go:360] acquireMachinesLock for no-preload-558000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:58:36.720440    7107 start.go:364] duration metric: took 276.334µs to acquireMachinesLock for "no-preload-558000"
	I1209 16:58:36.720542    7107 start.go:96] Skipping create...Using existing machine configuration
	I1209 16:58:36.720560    7107 fix.go:54] fixHost starting: 
	I1209 16:58:36.721227    7107 fix.go:112] recreateIfNeeded on no-preload-558000: state=Stopped err=<nil>
	W1209 16:58:36.721253    7107 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 16:58:36.726821    7107 out.go:177] * Restarting existing qemu2 VM for "no-preload-558000" ...
	I1209 16:58:36.733714    7107 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:58:36.733872    7107 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:fe:81:fa:ba:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/no-preload-558000/disk.qcow2
	I1209 16:58:36.744579    7107 main.go:141] libmachine: STDOUT: 
	I1209 16:58:36.744630    7107 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:58:36.744709    7107 fix.go:56] duration metric: took 24.1485ms for fixHost
	I1209 16:58:36.744730    7107 start.go:83] releasing machines lock for "no-preload-558000", held for 24.269583ms
	W1209 16:58:36.744943    7107 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-558000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-558000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:36.752854    7107 out.go:201] 
	W1209 16:58:36.756946    7107 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:58:36.756994    7107 out.go:270] * 
	* 
	W1209 16:58:36.759280    7107 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:58:36.766705    7107 out.go:201] 

** /stderr **
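
[editor's note] Because this profile was started with --preload=false, the cache.go lines above verify each component image in the local cache individually instead of relying on the preloaded tarball; etcd 3.5.15-0 was still being fetched (cache.go:162 "opening: ...") when the start aborted on the socket error. If image caching rather than socket_vmnet were the bottleneck, a hypothetical pre-seeding step could look like the following sketch (`cache add` is an existing minikube subcommand; the image tag is taken from this log):

	out/minikube-darwin-arm64 cache add registry.k8s.io/etcd:3.5.15-0
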
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-558000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (69.456708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.69s)
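
[editor's note] The suggested `minikube delete -p no-preload-558000` only helps once a socket_vmnet daemon is reachable again (see the note after the default-k8s-diff-port FirstStart failure); while the daemon is down, a recreated VM hits the same "Connection refused". A recovery sketch using the same binary and profile as this test invocation, with the test-only flags dropped:

	out/minikube-darwin-arm64 delete -p no-preload-558000
	out/minikube-darwin-arm64 start -p no-preload-558000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.31.2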

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-877000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-877000 create -f testdata/busybox.yaml: exit status 1 (28.839708ms)

** stderr **
	error: context "default-k8s-diff-port-877000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-877000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000: exit status 7 (33.215041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-877000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000: exit status 7 (33.698125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-877000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-877000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-877000 describe deploy/metrics-server -n kube-system: exit status 1 (27.355667ms)

** stderr **
	error: context "default-k8s-diff-port-877000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-877000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000: exit status 7 (33.580833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-877000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-877000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.214455s)

-- stdout --
	* [default-k8s-diff-port-877000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-877000" primary control-plane node in "default-k8s-diff-port-877000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-877000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-877000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 16:58:36.062591    7156 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:58:36.062770    7156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:36.062773    7156 out.go:358] Setting ErrFile to fd 2...
	I1209 16:58:36.062775    7156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:36.062924    7156 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:58:36.063996    7156 out.go:352] Setting JSON to false
	I1209 16:58:36.081321    7156 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5286,"bootTime":1733787030,"procs":536,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:58:36.081402    7156 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:58:36.086215    7156 out.go:177] * [default-k8s-diff-port-877000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:58:36.090935    7156 notify.go:220] Checking for updates...
	I1209 16:58:36.095217    7156 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:58:36.099223    7156 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:58:36.102147    7156 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:58:36.109261    7156 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:58:36.117163    7156 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:58:36.121238    7156 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:58:36.125439    7156 config.go:182] Loaded profile config "default-k8s-diff-port-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:58:36.125717    7156 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:58:36.130210    7156 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 16:58:36.137167    7156 start.go:297] selected driver: qemu2
	I1209 16:58:36.137172    7156 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-877000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:58:36.137215    7156 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:58:36.139699    7156 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:58:36.139724    7156 cni.go:84] Creating CNI manager for ""
	I1209 16:58:36.139743    7156 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:58:36.139770    7156 start.go:340] cluster config:
	{Name:default-k8s-diff-port-877000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-877000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:58:36.144190    7156 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:36.152216    7156 out.go:177] * Starting "default-k8s-diff-port-877000" primary control-plane node in "default-k8s-diff-port-877000" cluster
	I1209 16:58:36.156192    7156 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:58:36.156208    7156 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:58:36.156220    7156 cache.go:56] Caching tarball of preloaded images
	I1209 16:58:36.156295    7156 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:58:36.156301    7156 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:58:36.156356    7156 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/default-k8s-diff-port-877000/config.json ...
	I1209 16:58:36.156666    7156 start.go:360] acquireMachinesLock for default-k8s-diff-port-877000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:58:36.156696    7156 start.go:364] duration metric: took 24.083µs to acquireMachinesLock for "default-k8s-diff-port-877000"
	I1209 16:58:36.156704    7156 start.go:96] Skipping create...Using existing machine configuration
	I1209 16:58:36.156710    7156 fix.go:54] fixHost starting: 
	I1209 16:58:36.156824    7156 fix.go:112] recreateIfNeeded on default-k8s-diff-port-877000: state=Stopped err=<nil>
	W1209 16:58:36.156834    7156 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 16:58:36.161208    7156 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-877000" ...
	I1209 16:58:36.169207    7156 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:58:36.169248    7156 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:a3:de:be:64:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/disk.qcow2
	I1209 16:58:36.171437    7156 main.go:141] libmachine: STDOUT: 
	I1209 16:58:36.171461    7156 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:58:36.171491    7156 fix.go:56] duration metric: took 14.780875ms for fixHost
	I1209 16:58:36.171497    7156 start.go:83] releasing machines lock for "default-k8s-diff-port-877000", held for 14.796375ms
	W1209 16:58:36.171504    7156 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:58:36.171538    7156 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:36.171543    7156 start.go:729] Will try again in 5 seconds ...
	I1209 16:58:41.173782    7156 start.go:360] acquireMachinesLock for default-k8s-diff-port-877000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:58:41.174268    7156 start.go:364] duration metric: took 405.042µs to acquireMachinesLock for "default-k8s-diff-port-877000"
	I1209 16:58:41.174376    7156 start.go:96] Skipping create...Using existing machine configuration
	I1209 16:58:41.174397    7156 fix.go:54] fixHost starting: 
	I1209 16:58:41.175214    7156 fix.go:112] recreateIfNeeded on default-k8s-diff-port-877000: state=Stopped err=<nil>
	W1209 16:58:41.175245    7156 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 16:58:41.195926    7156 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-877000" ...
	I1209 16:58:41.200705    7156 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:58:41.200885    7156 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:a3:de:be:64:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/default-k8s-diff-port-877000/disk.qcow2
	I1209 16:58:41.211399    7156 main.go:141] libmachine: STDOUT: 
	I1209 16:58:41.211457    7156 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:58:41.211538    7156 fix.go:56] duration metric: took 37.142083ms for fixHost
	I1209 16:58:41.211555    7156 start.go:83] releasing machines lock for "default-k8s-diff-port-877000", held for 37.265625ms
	W1209 16:58:41.211768    7156 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-877000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-877000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:41.219666    7156 out.go:201] 
	W1209 16:58:41.222706    7156 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:58:41.222730    7156 out.go:270] * 
	* 
	W1209 16:58:41.225390    7156 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:58:41.233651    7156 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-877000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000: exit status 7 (73.255791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.29s)
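
The recurring failure in these qemu2 starts is the driver's inability to reach the socket_vmnet unix socket. A minimal standalone Go sketch (not part of the test suite; the socket path is taken from the SocketVMnetPath value in the config dumps above) that reproduces the same connectivity check:

// socketcheck.go - probe the socket_vmnet endpoint the qemu2 driver dials.
// A dial failure here corresponds to the repeated
// `Failed to connect to "/var/run/socket_vmnet": Connection refused` above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails on the build agent, the socket_vmnet service is likely not running at all, which would account for every GUEST_PROVISION failure in this group.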

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-558000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (35.320125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-558000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-558000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-558000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.5695ms)

** stderr ** 
	error: context "no-preload-558000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-558000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (33.327125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-558000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (33.447084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
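
The empty `image list` result produces the `(-want +got)` diff above. A hypothetical sketch of how that diff format is generated (assuming github.com/google/go-cmp, whose output this matches; not the test's actual code):

// diffsketch.go - illustrates the "(-want +got)" diff rendering with go-cmp.
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/pause:3.10",
	}
	var got []string // empty: the VM never started, so no images were listed
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.31.2 images missing (-want +got):\n%s", diff)
	}
}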

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-558000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-558000 --alsologtostderr -v=1: exit status 83 (44.030625ms)

-- stdout --
	* The control-plane node no-preload-558000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-558000"

-- /stdout --
** stderr ** 
	I1209 16:58:37.061436    7175 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:58:37.061640    7175 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:37.061643    7175 out.go:358] Setting ErrFile to fd 2...
	I1209 16:58:37.061645    7175 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:37.061782    7175 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:58:37.062009    7175 out.go:352] Setting JSON to false
	I1209 16:58:37.062020    7175 mustload.go:65] Loading cluster: no-preload-558000
	I1209 16:58:37.062262    7175 config.go:182] Loaded profile config "no-preload-558000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:58:37.065697    7175 out.go:177] * The control-plane node no-preload-558000 host is not running: state=Stopped
	I1209 16:58:37.069669    7175 out.go:177]   To start a cluster, run: "minikube start -p no-preload-558000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-558000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (33.244917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-558000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (33.534916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
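
The post-mortem helpers tolerate `status` exit code 7 ("may be ok" above), since a stopped host is an expected state after these failed starts. A hypothetical sketch of capturing that exit code in Go, mirroring what the helpers do (not the helpers' actual code):

// statuscheck.go - run minikube status and inspect the process exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "no-preload-558000")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// exit status 7 corresponds to the "Stopped" host state above
		fmt.Printf("status: %s (exit %d)\n", out, ee.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("run error:", err)
		return
	}
	fmt.Printf("status: %s\n", out)
}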

TestStartStop/group/newest-cni/serial/FirstStart (9.97s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-757000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-757000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.891613625s)

-- stdout --
	* [newest-cni-757000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-757000" primary control-plane node in "newest-cni-757000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-757000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 16:58:37.402383    7192 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:58:37.402544    7192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:37.402548    7192 out.go:358] Setting ErrFile to fd 2...
	I1209 16:58:37.402550    7192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:37.402697    7192 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:58:37.403885    7192 out.go:352] Setting JSON to false
	I1209 16:58:37.421541    7192 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5287,"bootTime":1733787030,"procs":536,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:58:37.421610    7192 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:58:37.426666    7192 out.go:177] * [newest-cni-757000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:58:37.434692    7192 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:58:37.434756    7192 notify.go:220] Checking for updates...
	I1209 16:58:37.441609    7192 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:58:37.443031    7192 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:58:37.446631    7192 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:58:37.449635    7192 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:58:37.452674    7192 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:58:37.455952    7192 config.go:182] Loaded profile config "default-k8s-diff-port-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:58:37.456014    7192 config.go:182] Loaded profile config "multinode-350000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:58:37.456074    7192 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:58:37.459645    7192 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:58:37.466652    7192 start.go:297] selected driver: qemu2
	I1209 16:58:37.466658    7192 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:58:37.466668    7192 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:58:37.469245    7192 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1209 16:58:37.469280    7192 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1209 16:58:37.472683    7192 out.go:177] * Automatically selected the socket_vmnet network
	I1209 16:58:37.475787    7192 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 16:58:37.475802    7192 cni.go:84] Creating CNI manager for ""
	I1209 16:58:37.475823    7192 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:58:37.475827    7192 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 16:58:37.475854    7192 start.go:340] cluster config:
	{Name:newest-cni-757000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-757000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:58:37.480640    7192 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:37.488646    7192 out.go:177] * Starting "newest-cni-757000" primary control-plane node in "newest-cni-757000" cluster
	I1209 16:58:37.492538    7192 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:58:37.492558    7192 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:58:37.492567    7192 cache.go:56] Caching tarball of preloaded images
	I1209 16:58:37.492651    7192 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:58:37.492657    7192 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:58:37.492722    7192 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/newest-cni-757000/config.json ...
	I1209 16:58:37.492734    7192 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/newest-cni-757000/config.json: {Name:mkbc0c81398579c1ed419cad2378c62b5f5809b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:58:37.493173    7192 start.go:360] acquireMachinesLock for newest-cni-757000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:58:37.493224    7192 start.go:364] duration metric: took 44.791µs to acquireMachinesLock for "newest-cni-757000"
	I1209 16:58:37.493237    7192 start.go:93] Provisioning new machine with config: &{Name:newest-cni-757000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.2 ClusterName:newest-cni-757000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:58:37.493271    7192 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:58:37.497675    7192 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 16:58:37.515642    7192 start.go:159] libmachine.API.Create for "newest-cni-757000" (driver="qemu2")
	I1209 16:58:37.515669    7192 client.go:168] LocalClient.Create starting
	I1209 16:58:37.515746    7192 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:58:37.515787    7192 main.go:141] libmachine: Decoding PEM data...
	I1209 16:58:37.515796    7192 main.go:141] libmachine: Parsing certificate...
	I1209 16:58:37.515832    7192 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:58:37.515863    7192 main.go:141] libmachine: Decoding PEM data...
	I1209 16:58:37.515872    7192 main.go:141] libmachine: Parsing certificate...
	I1209 16:58:37.516360    7192 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:58:37.675764    7192 main.go:141] libmachine: Creating SSH key...
	I1209 16:58:37.830484    7192 main.go:141] libmachine: Creating Disk image...
	I1209 16:58:37.830491    7192 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:58:37.830762    7192 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/disk.qcow2
	I1209 16:58:37.841026    7192 main.go:141] libmachine: STDOUT: 
	I1209 16:58:37.841047    7192 main.go:141] libmachine: STDERR: 
	I1209 16:58:37.841108    7192 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/disk.qcow2 +20000M
	I1209 16:58:37.849510    7192 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:58:37.849526    7192 main.go:141] libmachine: STDERR: 
	I1209 16:58:37.849537    7192 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/disk.qcow2
	I1209 16:58:37.849542    7192 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:58:37.849554    7192 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:58:37.849589    7192 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:8b:4f:13:6a:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/disk.qcow2
	I1209 16:58:37.851421    7192 main.go:141] libmachine: STDOUT: 
	I1209 16:58:37.851436    7192 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:58:37.851453    7192 client.go:171] duration metric: took 335.778125ms to LocalClient.Create
	I1209 16:58:39.853658    7192 start.go:128] duration metric: took 2.360355125s to createHost
	I1209 16:58:39.853741    7192 start.go:83] releasing machines lock for "newest-cni-757000", held for 2.360507584s
	W1209 16:58:39.853849    7192 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:39.866129    7192 out.go:177] * Deleting "newest-cni-757000" in qemu2 ...
	W1209 16:58:39.896385    7192 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:39.896408    7192 start.go:729] Will try again in 5 seconds ...
	I1209 16:58:44.898654    7192 start.go:360] acquireMachinesLock for newest-cni-757000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:58:44.899200    7192 start.go:364] duration metric: took 416.625µs to acquireMachinesLock for "newest-cni-757000"
	I1209 16:58:44.899322    7192 start.go:93] Provisioning new machine with config: &{Name:newest-cni-757000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.2 ClusterName:newest-cni-757000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:58:44.899610    7192 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:58:44.909241    7192 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 16:58:44.956037    7192 start.go:159] libmachine.API.Create for "newest-cni-757000" (driver="qemu2")
	I1209 16:58:44.956097    7192 client.go:168] LocalClient.Create starting
	I1209 16:58:44.956228    7192 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:58:44.956325    7192 main.go:141] libmachine: Decoding PEM data...
	I1209 16:58:44.956341    7192 main.go:141] libmachine: Parsing certificate...
	I1209 16:58:44.956408    7192 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:58:44.956467    7192 main.go:141] libmachine: Decoding PEM data...
	I1209 16:58:44.956481    7192 main.go:141] libmachine: Parsing certificate...
	I1209 16:58:44.960650    7192 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:58:45.137422    7192 main.go:141] libmachine: Creating SSH key...
	I1209 16:58:45.195872    7192 main.go:141] libmachine: Creating Disk image...
	I1209 16:58:45.195878    7192 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:58:45.196096    7192 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/disk.qcow2
	I1209 16:58:45.205893    7192 main.go:141] libmachine: STDOUT: 
	I1209 16:58:45.205914    7192 main.go:141] libmachine: STDERR: 
	I1209 16:58:45.205964    7192 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/disk.qcow2 +20000M
	I1209 16:58:45.214475    7192 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:58:45.214488    7192 main.go:141] libmachine: STDERR: 
	I1209 16:58:45.214498    7192 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/disk.qcow2
	I1209 16:58:45.214504    7192 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:58:45.214514    7192 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:58:45.214557    7192 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:8f:4f:03:39:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/disk.qcow2
	I1209 16:58:45.216333    7192 main.go:141] libmachine: STDOUT: 
	I1209 16:58:45.216352    7192 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:58:45.216367    7192 client.go:171] duration metric: took 260.264792ms to LocalClient.Create
	I1209 16:58:47.218549    7192 start.go:128] duration metric: took 2.318912041s to createHost
	I1209 16:58:47.218666    7192 start.go:83] releasing machines lock for "newest-cni-757000", held for 2.319407417s
	W1209 16:58:47.219065    7192 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-757000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-757000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:47.227632    7192 out.go:201] 
	W1209 16:58:47.238552    7192 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:58:47.238576    7192 out.go:270] * 
	* 
	W1209 16:58:47.241452    7192 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:58:47.249536    7192 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-757000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000: exit status 7 (70.049125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.97s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-877000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000: exit status 7 (35.468292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)
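
Note: the context "does not exist" error is a knock-on effect of the failed first start rather than an independent bug: minikube only writes a kubeconfig context once a cluster comes up, so every kubectl-based assertion in this group fails immediately. A quick confirmation on the host (sketch; profile name reused from the logs):

	# The failed profile is absent from the known contexts.
	kubectl config get-contexts
	# Querying it by name exits non-zero, matching the error above.
	kubectl config get-contexts default-k8s-diff-port-877000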

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-877000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-877000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-877000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.4095ms)

** stderr ** 
	error: context "default-k8s-diff-port-877000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-877000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000: exit status 7 (33.01075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-877000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000: exit status 7 (33.2105ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)
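
Note: the -want +got diff above lists the entire expected image set as missing because "image list" had no running node to inspect; nothing was ever pulled into the VM. On a healthy cluster the same command returns each v1.31.2 image, which can be spot-checked by hand (sketch, reusing the command from the test):

	# Reproduce the assertion manually; expect the control-plane images on a working cluster.
	out/minikube-darwin-arm64 -p default-k8s-diff-port-877000 image list --format=json
	out/minikube-darwin-arm64 -p default-k8s-diff-port-877000 image list | grep kube-apiserver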

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-877000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-877000 --alsologtostderr -v=1: exit status 83 (45.368166ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-877000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-877000"

-- /stdout --
** stderr ** 
	I1209 16:58:41.526745    7216 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:58:41.526932    7216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:41.526935    7216 out.go:358] Setting ErrFile to fd 2...
	I1209 16:58:41.526938    7216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:41.527063    7216 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:58:41.527290    7216 out.go:352] Setting JSON to false
	I1209 16:58:41.527298    7216 mustload.go:65] Loading cluster: default-k8s-diff-port-877000
	I1209 16:58:41.527506    7216 config.go:182] Loaded profile config "default-k8s-diff-port-877000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:58:41.531676    7216 out.go:177] * The control-plane node default-k8s-diff-port-877000 host is not running: state=Stopped
	I1209 16:58:41.535624    7216 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-877000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-877000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000: exit status 7 (33.462167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-877000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000: exit status 7 (33.523833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-877000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
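
Note: three distinct exit codes appear in this block: 80 (GUEST_PROVISION) from the failed start, 7 from "status" against a stopped host (which the harness treats as "may be ok"), and 83 from "pause", which found the profile but declined to act on a stopped host and printed start advice instead. The sequence reproduces directly from the log (sketch):

	out/minikube-darwin-arm64 status -p default-k8s-diff-port-877000   # exit 7, prints "Stopped"
	out/minikube-darwin-arm64 pause -p default-k8s-diff-port-877000    # exit 83, suggests "minikube start"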

TestStartStop/group/embed-certs/serial/FirstStart (10.02s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-583000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-583000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.942536083s)

-- stdout --
	* [embed-certs-583000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-583000" primary control-plane node in "embed-certs-583000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-583000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 16:58:41.872502    7233 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:58:41.872715    7233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:41.872718    7233 out.go:358] Setting ErrFile to fd 2...
	I1209 16:58:41.872721    7233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:41.872866    7233 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:58:41.874063    7233 out.go:352] Setting JSON to false
	I1209 16:58:41.891893    7233 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5291,"bootTime":1733787030,"procs":536,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:58:41.891967    7233 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:58:41.895654    7233 out.go:177] * [embed-certs-583000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:58:41.901613    7233 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:58:41.901692    7233 notify.go:220] Checking for updates...
	I1209 16:58:41.908613    7233 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:58:41.911648    7233 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:58:41.915620    7233 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:58:41.918676    7233 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:58:41.921641    7233 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:58:41.925010    7233 config.go:182] Loaded profile config "multinode-350000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:58:41.925079    7233 config.go:182] Loaded profile config "newest-cni-757000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:58:41.925127    7233 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:58:41.928583    7233 out.go:177] * Using the qemu2 driver based on user configuration
	I1209 16:58:41.935615    7233 start.go:297] selected driver: qemu2
	I1209 16:58:41.935622    7233 start.go:901] validating driver "qemu2" against <nil>
	I1209 16:58:41.935629    7233 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:58:41.938193    7233 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 16:58:41.941599    7233 out.go:177] * Automatically selected the socket_vmnet network
	I1209 16:58:41.945679    7233 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:58:41.945710    7233 cni.go:84] Creating CNI manager for ""
	I1209 16:58:41.945740    7233 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:58:41.945744    7233 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 16:58:41.945776    7233 start.go:340] cluster config:
	{Name:embed-certs-583000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-583000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:58:41.950506    7233 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:41.958496    7233 out.go:177] * Starting "embed-certs-583000" primary control-plane node in "embed-certs-583000" cluster
	I1209 16:58:41.962541    7233 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:58:41.962557    7233 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:58:41.962570    7233 cache.go:56] Caching tarball of preloaded images
	I1209 16:58:41.962645    7233 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:58:41.962656    7233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:58:41.962710    7233 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/embed-certs-583000/config.json ...
	I1209 16:58:41.962721    7233 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/embed-certs-583000/config.json: {Name:mk160d29f200043300f5f5b29f0fcd9621155acf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 16:58:41.963139    7233 start.go:360] acquireMachinesLock for embed-certs-583000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:58:41.963190    7233 start.go:364] duration metric: took 44.167µs to acquireMachinesLock for "embed-certs-583000"
	I1209 16:58:41.963203    7233 start.go:93] Provisioning new machine with config: &{Name:embed-certs-583000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-583000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:58:41.963237    7233 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:58:41.971600    7233 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 16:58:41.989210    7233 start.go:159] libmachine.API.Create for "embed-certs-583000" (driver="qemu2")
	I1209 16:58:41.989239    7233 client.go:168] LocalClient.Create starting
	I1209 16:58:41.989354    7233 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:58:41.989394    7233 main.go:141] libmachine: Decoding PEM data...
	I1209 16:58:41.989408    7233 main.go:141] libmachine: Parsing certificate...
	I1209 16:58:41.989446    7233 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:58:41.989480    7233 main.go:141] libmachine: Decoding PEM data...
	I1209 16:58:41.989491    7233 main.go:141] libmachine: Parsing certificate...
	I1209 16:58:41.989942    7233 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:58:42.148286    7233 main.go:141] libmachine: Creating SSH key...
	I1209 16:58:42.254484    7233 main.go:141] libmachine: Creating Disk image...
	I1209 16:58:42.254490    7233 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:58:42.254700    7233 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/disk.qcow2
	I1209 16:58:42.264779    7233 main.go:141] libmachine: STDOUT: 
	I1209 16:58:42.264793    7233 main.go:141] libmachine: STDERR: 
	I1209 16:58:42.264860    7233 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/disk.qcow2 +20000M
	I1209 16:58:42.273436    7233 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:58:42.273450    7233 main.go:141] libmachine: STDERR: 
	I1209 16:58:42.273467    7233 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/disk.qcow2
	I1209 16:58:42.273471    7233 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:58:42.273485    7233 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:58:42.273511    7233 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:f1:62:70:2b:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/disk.qcow2
	I1209 16:58:42.275361    7233 main.go:141] libmachine: STDOUT: 
	I1209 16:58:42.275374    7233 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:58:42.275392    7233 client.go:171] duration metric: took 286.146333ms to LocalClient.Create
	I1209 16:58:44.277632    7233 start.go:128] duration metric: took 2.314362125s to createHost
	I1209 16:58:44.277705    7233 start.go:83] releasing machines lock for "embed-certs-583000", held for 2.314507625s
	W1209 16:58:44.277836    7233 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:44.289022    7233 out.go:177] * Deleting "embed-certs-583000" in qemu2 ...
	W1209 16:58:44.328424    7233 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:44.328455    7233 start.go:729] Will try again in 5 seconds ...
	I1209 16:58:49.330637    7233 start.go:360] acquireMachinesLock for embed-certs-583000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:58:49.331109    7233 start.go:364] duration metric: took 388.375µs to acquireMachinesLock for "embed-certs-583000"
	I1209 16:58:49.331206    7233 start.go:93] Provisioning new machine with config: &{Name:embed-certs-583000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-583000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1209 16:58:49.331470    7233 start.go:125] createHost starting for "" (driver="qemu2")
	I1209 16:58:49.340067    7233 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1209 16:58:49.388790    7233 start.go:159] libmachine.API.Create for "embed-certs-583000" (driver="qemu2")
	I1209 16:58:49.388856    7233 client.go:168] LocalClient.Create starting
	I1209 16:58:49.388986    7233 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/ca.pem
	I1209 16:58:49.389043    7233 main.go:141] libmachine: Decoding PEM data...
	I1209 16:58:49.389058    7233 main.go:141] libmachine: Parsing certificate...
	I1209 16:58:49.389128    7233 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20062-1231/.minikube/certs/cert.pem
	I1209 16:58:49.389160    7233 main.go:141] libmachine: Decoding PEM data...
	I1209 16:58:49.389173    7233 main.go:141] libmachine: Parsing certificate...
	I1209 16:58:49.389811    7233 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1209 16:58:49.588872    7233 main.go:141] libmachine: Creating SSH key...
	I1209 16:58:49.716144    7233 main.go:141] libmachine: Creating Disk image...
	I1209 16:58:49.716160    7233 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1209 16:58:49.716393    7233 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/disk.qcow2.raw /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/disk.qcow2
	I1209 16:58:49.726597    7233 main.go:141] libmachine: STDOUT: 
	I1209 16:58:49.726617    7233 main.go:141] libmachine: STDERR: 
	I1209 16:58:49.726680    7233 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/disk.qcow2 +20000M
	I1209 16:58:49.735369    7233 main.go:141] libmachine: STDOUT: Image resized.
	
	I1209 16:58:49.735385    7233 main.go:141] libmachine: STDERR: 
	I1209 16:58:49.735401    7233 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/disk.qcow2
	I1209 16:58:49.735405    7233 main.go:141] libmachine: Starting QEMU VM...
	I1209 16:58:49.735415    7233 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:58:49.735440    7233 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:e8:5d:21:17:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/disk.qcow2
	I1209 16:58:49.737276    7233 main.go:141] libmachine: STDOUT: 
	I1209 16:58:49.737290    7233 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:58:49.737309    7233 client.go:171] duration metric: took 348.448209ms to LocalClient.Create
	I1209 16:58:51.739515    7233 start.go:128] duration metric: took 2.408002958s to createHost
	I1209 16:58:51.739561    7233 start.go:83] releasing machines lock for "embed-certs-583000", held for 2.408430916s
	W1209 16:58:51.739863    7233 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-583000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-583000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:51.751573    7233 out.go:201] 
	W1209 16:58:51.757487    7233 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:58:51.757526    7233 out.go:270] * 
	* 
	W1209 16:58:51.760126    7233 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:58:51.768502    7233 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-583000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000: exit status 7 (71.452875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-583000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.02s)
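
Note: the stderr above shows the create path succeeding right up to networking: qemu-img convert and qemu-img resize both complete, and only the socket_vmnet_client wrapper fails. That isolates the problem to the vmnet socket rather than QEMU or the disk image. The wrapper failure can likely be reproduced without QEMU at all (sketch; true(1) stands in for qemu-system-aarch64 and is an assumption about the client's exec behavior):

	# socket_vmnet_client connects to the socket, then runs the given command with the fd passed in.
	# With the daemon down it should fail the same way before anything is executed.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true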

TestStartStop/group/newest-cni/serial/SecondStart (6.21s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-757000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-757000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (6.138429958s)

-- stdout --
	* [newest-cni-757000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-757000" primary control-plane node in "newest-cni-757000" cluster
	* Restarting existing qemu2 VM for "newest-cni-757000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-757000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1209 16:58:50.728027    7281 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:58:50.728196    7281 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:50.728200    7281 out.go:358] Setting ErrFile to fd 2...
	I1209 16:58:50.728210    7281 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:50.728338    7281 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:58:50.729381    7281 out.go:352] Setting JSON to false
	I1209 16:58:50.746905    7281 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5300,"bootTime":1733787030,"procs":535,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:58:50.746980    7281 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:58:50.751575    7281 out.go:177] * [newest-cni-757000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:58:50.759476    7281 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:58:50.759505    7281 notify.go:220] Checking for updates...
	I1209 16:58:50.765460    7281 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:58:50.768512    7281 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:58:50.771491    7281 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:58:50.774534    7281 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:58:50.777455    7281 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:58:50.780819    7281 config.go:182] Loaded profile config "newest-cni-757000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:58:50.781078    7281 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:58:50.784464    7281 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 16:58:50.791486    7281 start.go:297] selected driver: qemu2
	I1209 16:58:50.791494    7281 start.go:901] validating driver "qemu2" against &{Name:newest-cni-757000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-757000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:58:50.791554    7281 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:58:50.794041    7281 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1209 16:58:50.794063    7281 cni.go:84] Creating CNI manager for ""
	I1209 16:58:50.794082    7281 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:58:50.794109    7281 start.go:340] cluster config:
	{Name:newest-cni-757000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-757000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:58:50.798370    7281 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:50.806482    7281 out.go:177] * Starting "newest-cni-757000" primary control-plane node in "newest-cni-757000" cluster
	I1209 16:58:50.809515    7281 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:58:50.809536    7281 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:58:50.809549    7281 cache.go:56] Caching tarball of preloaded images
	I1209 16:58:50.809622    7281 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:58:50.809634    7281 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:58:50.809685    7281 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/newest-cni-757000/config.json ...
	I1209 16:58:50.810210    7281 start.go:360] acquireMachinesLock for newest-cni-757000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:58:51.739715    7281 start.go:364] duration metric: took 929.456209ms to acquireMachinesLock for "newest-cni-757000"
	I1209 16:58:51.739887    7281 start.go:96] Skipping create...Using existing machine configuration
	I1209 16:58:51.739918    7281 fix.go:54] fixHost starting: 
	I1209 16:58:51.740593    7281 fix.go:112] recreateIfNeeded on newest-cni-757000: state=Stopped err=<nil>
	W1209 16:58:51.740633    7281 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 16:58:51.754467    7281 out.go:177] * Restarting existing qemu2 VM for "newest-cni-757000" ...
	I1209 16:58:51.761578    7281 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:58:51.761816    7281 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:8f:4f:03:39:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/disk.qcow2
	I1209 16:58:51.773011    7281 main.go:141] libmachine: STDOUT: 
	I1209 16:58:51.773097    7281 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:58:51.773267    7281 fix.go:56] duration metric: took 33.351791ms for fixHost
	I1209 16:58:51.773287    7281 start.go:83] releasing machines lock for "newest-cni-757000", held for 33.51075ms
	W1209 16:58:51.773316    7281 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:58:51.773460    7281 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:51.773475    7281 start.go:729] Will try again in 5 seconds ...
	I1209 16:58:56.775720    7281 start.go:360] acquireMachinesLock for newest-cni-757000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:58:56.776211    7281 start.go:364] duration metric: took 365.333µs to acquireMachinesLock for "newest-cni-757000"
	I1209 16:58:56.776373    7281 start.go:96] Skipping create...Using existing machine configuration
	I1209 16:58:56.776398    7281 fix.go:54] fixHost starting: 
	I1209 16:58:56.777113    7281 fix.go:112] recreateIfNeeded on newest-cni-757000: state=Stopped err=<nil>
	W1209 16:58:56.777144    7281 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 16:58:56.786693    7281 out.go:177] * Restarting existing qemu2 VM for "newest-cni-757000" ...
	I1209 16:58:56.790701    7281 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:58:56.790915    7281 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:8f:4f:03:39:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/newest-cni-757000/disk.qcow2
	I1209 16:58:56.800293    7281 main.go:141] libmachine: STDOUT: 
	I1209 16:58:56.800352    7281 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:58:56.800422    7281 fix.go:56] duration metric: took 24.027708ms for fixHost
	I1209 16:58:56.800441    7281 start.go:83] releasing machines lock for "newest-cni-757000", held for 24.205041ms
	W1209 16:58:56.800603    7281 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-757000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-757000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:56.808680    7281 out.go:201] 
	W1209 16:58:56.811697    7281 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:58:56.811720    7281 out.go:270] * 
	* 
	W1209 16:58:56.814421    7281 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:58:56.821747    7281 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-757000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000: exit status 7 (73.8455ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (6.21s)
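
Note: unlike FirstStart, this run takes the restart path ("Skipping create...Using existing machine configuration"), so no disk work is done, yet it dies on the identical socket_vmnet connection. Once the daemon is healthy again, the cleanup minikube itself suggests should apply (sketch; flags abbreviated from the full command in the log):

	out/minikube-darwin-arm64 delete -p newest-cni-757000
	out/minikube-darwin-arm64 start -p newest-cni-757000 --driver=qemu2 --kubernetes-version=v1.31.2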

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-583000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-583000 create -f testdata/busybox.yaml: exit status 1 (29.925625ms)

** stderr ** 
	error: context "embed-certs-583000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-583000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000: exit status 7 (33.317792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-583000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000: exit status 7 (32.9625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-583000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-583000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-583000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-583000 describe deploy/metrics-server -n kube-system: exit status 1 (26.844375ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-583000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-583000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000: exit status 7 (32.548208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-583000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-583000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-583000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.198806167s)

                                                
                                                
-- stdout --
	* [embed-certs-583000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-583000" primary control-plane node in "embed-certs-583000" cluster
	* Restarting existing qemu2 VM for "embed-certs-583000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-583000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:58:55.657546    7326 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:58:55.657723    7326 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:55.657727    7326 out.go:358] Setting ErrFile to fd 2...
	I1209 16:58:55.657729    7326 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:55.657853    7326 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:58:55.658945    7326 out.go:352] Setting JSON to false
	I1209 16:58:55.676504    7326 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5305,"bootTime":1733787030,"procs":537,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 16:58:55.676584    7326 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 16:58:55.681423    7326 out.go:177] * [embed-certs-583000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 16:58:55.688330    7326 notify.go:220] Checking for updates...
	I1209 16:58:55.691360    7326 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 16:58:55.695261    7326 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 16:58:55.698426    7326 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 16:58:55.702426    7326 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 16:58:55.705369    7326 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 16:58:55.709344    7326 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 16:58:55.713666    7326 config.go:182] Loaded profile config "embed-certs-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:58:55.713952    7326 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 16:58:55.718303    7326 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 16:58:55.725367    7326 start.go:297] selected driver: qemu2
	I1209 16:58:55.725373    7326 start.go:901] validating driver "qemu2" against &{Name:embed-certs-583000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-583000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:58:55.725421    7326 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 16:58:55.728029    7326 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1209 16:58:55.728060    7326 cni.go:84] Creating CNI manager for ""
	I1209 16:58:55.728086    7326 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 16:58:55.728106    7326 start.go:340] cluster config:
	{Name:embed-certs-583000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-583000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 16:58:55.732713    7326 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 16:58:55.741273    7326 out.go:177] * Starting "embed-certs-583000" primary control-plane node in "embed-certs-583000" cluster
	I1209 16:58:55.745393    7326 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 16:58:55.745412    7326 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 16:58:55.745429    7326 cache.go:56] Caching tarball of preloaded images
	I1209 16:58:55.745496    7326 preload.go:172] Found /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1209 16:58:55.745502    7326 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 16:58:55.745577    7326 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/embed-certs-583000/config.json ...
	I1209 16:58:55.746162    7326 start.go:360] acquireMachinesLock for embed-certs-583000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:58:55.746193    7326 start.go:364] duration metric: took 24.625µs to acquireMachinesLock for "embed-certs-583000"
	I1209 16:58:55.746203    7326 start.go:96] Skipping create...Using existing machine configuration
	I1209 16:58:55.746208    7326 fix.go:54] fixHost starting: 
	I1209 16:58:55.746329    7326 fix.go:112] recreateIfNeeded on embed-certs-583000: state=Stopped err=<nil>
	W1209 16:58:55.746337    7326 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 16:58:55.749345    7326 out.go:177] * Restarting existing qemu2 VM for "embed-certs-583000" ...
	I1209 16:58:55.757386    7326 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:58:55.757439    7326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:e8:5d:21:17:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/disk.qcow2
	I1209 16:58:55.759784    7326 main.go:141] libmachine: STDOUT: 
	I1209 16:58:55.759803    7326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:58:55.759833    7326 fix.go:56] duration metric: took 13.623833ms for fixHost
	I1209 16:58:55.759839    7326 start.go:83] releasing machines lock for "embed-certs-583000", held for 13.640792ms
	W1209 16:58:55.759845    7326 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:58:55.759901    7326 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:58:55.759906    7326 start.go:729] Will try again in 5 seconds ...
	I1209 16:59:00.762164    7326 start.go:360] acquireMachinesLock for embed-certs-583000: {Name:mk41e7410fafc3872a62350e32e6ae614bf532a6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1209 16:59:00.762727    7326 start.go:364] duration metric: took 448.792µs to acquireMachinesLock for "embed-certs-583000"
	I1209 16:59:00.762871    7326 start.go:96] Skipping create...Using existing machine configuration
	I1209 16:59:00.762891    7326 fix.go:54] fixHost starting: 
	I1209 16:59:00.763702    7326 fix.go:112] recreateIfNeeded on embed-certs-583000: state=Stopped err=<nil>
	W1209 16:59:00.763731    7326 fix.go:138] unexpected machine state, will restart: <nil>
	I1209 16:59:00.768365    7326 out.go:177] * Restarting existing qemu2 VM for "embed-certs-583000" ...
	I1209 16:59:00.777183    7326 qemu.go:418] Using hvf for hardware acceleration
	I1209 16:59:00.777467    7326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:e8:5d:21:17:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20062-1231/.minikube/machines/embed-certs-583000/disk.qcow2
	I1209 16:59:00.788115    7326 main.go:141] libmachine: STDOUT: 
	I1209 16:59:00.788169    7326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1209 16:59:00.788276    7326 fix.go:56] duration metric: took 25.386542ms for fixHost
	I1209 16:59:00.788293    7326 start.go:83] releasing machines lock for "embed-certs-583000", held for 25.540333ms
	W1209 16:59:00.788502    7326 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-583000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-583000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1209 16:59:00.795132    7326 out.go:201] 
	W1209 16:59:00.802859    7326 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1209 16:59:00.802909    7326 out.go:270] * 
	* 
	W1209 16:59:00.805328    7326 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 16:59:00.811762    7326 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-583000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000: exit status 7 (74.283166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-583000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.27s)
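Note: both restart attempts above fail identically with Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. the socket_vmnet daemon was not accepting connections on the host, so socket_vmnet_client could not hand qemu-system-aarch64 a network file descriptor. A hypothetical diagnostic (not part of minikube) that dials the same unix socket directly:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path reported in the failure above

	// socket_vmnet_client starts by connecting to this unix socket;
	// probing it separates "daemon not running" from VM-side problems.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" => the socket file exists but nothing accepts;
		// "no such file or directory" => the daemon never created it.
		fmt.Printf("socket_vmnet unreachable: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}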

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-757000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000: exit status 7 (33.095083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)
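Note: the (-want +got) diff above is the google/go-cmp convention: every expected image carries a leading "-" and nothing carries "+", because image list returns nothing while the host is stopped. A reduced sketch of such a comparison follows; the want list is copied from the output, the empty got list is assumed, and this is not the test's literal code.

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/kube-apiserver:v1.31.2",
		"registry.k8s.io/kube-controller-manager:v1.31.2",
		"registry.k8s.io/kube-proxy:v1.31.2",
		"registry.k8s.io/kube-scheduler:v1.31.2",
		"registry.k8s.io/pause:3.10",
	}
	var got []string // image list yields nothing for a stopped host

	// Entries present only in want are prefixed "-", only in got "+",
	// which is exactly the shape of the failure message above.
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.31.2 images missing (-want +got):\n%s", diff)
	}
}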

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-757000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-757000 --alsologtostderr -v=1: exit status 83 (45.475416ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-757000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-757000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:58:57.019909    7342 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:58:57.020110    7342 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:57.020113    7342 out.go:358] Setting ErrFile to fd 2...
	I1209 16:58:57.020115    7342 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:58:57.020252    7342 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:58:57.020471    7342 out.go:352] Setting JSON to false
	I1209 16:58:57.020478    7342 mustload.go:65] Loading cluster: newest-cni-757000
	I1209 16:58:57.020714    7342 config.go:182] Loaded profile config "newest-cni-757000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:58:57.025396    7342 out.go:177] * The control-plane node newest-cni-757000 host is not running: state=Stopped
	I1209 16:58:57.029151    7342 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-757000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-757000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000: exit status 7 (33.449708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-757000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000: exit status 7 (33.203666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-757000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)
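Note: pause exits 83 before attempting any work: the mustload step (mustload.go:65 above) loads the profile config, sees state=Stopped, and prints the "To start a cluster" hint instead of pausing. A hypothetical reduction of that guard, with the exit code and messages taken from this run:

package main

import (
	"fmt"
	"os"
)

// ensureRunning mirrors the guard visible in the stderr above: refuse to
// operate on a profile whose control-plane host is not Running.
func ensureRunning(profile, state string) bool {
	if state == "Running" {
		return true
	}
	fmt.Printf("* The control-plane node %s host is not running: state=%s\n", profile, state)
	fmt.Printf("  To start a cluster, run: %q\n", "minikube start -p "+profile)
	return false
}

func main() {
	if !ensureRunning("newest-cni-757000", "Stopped") {
		os.Exit(83) // the exit status pause reports in this run
	}
}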

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-583000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000: exit status 7 (36.495209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-583000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-583000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-583000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-583000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.269708ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-583000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-583000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000: exit status 7 (35.237ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-583000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
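Note: these post-stop checks fail before touching any cluster: the kubeconfig context "embed-certs-583000" was never (re)written because SecondStart failed, so building a client config errors out immediately. A hedged sketch of where that message originates, using k8s.io/client-go's clientcmd; the context name comes from the log, and this is not the test's actual code.

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Resolve kubeconfig the way kubectl --context does: default
	// loading rules plus an explicit context override.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "embed-certs-583000"}

	_, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
	if err != nil {
		// With no such entry in the kubeconfig this reports:
		//   context "embed-certs-583000" does not exist
		fmt.Printf("client config: %v\n", err)
	}
}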

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-583000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000: exit status 7 (34.183709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-583000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-583000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-583000 --alsologtostderr -v=1: exit status 83 (45.183ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-583000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-583000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1209 16:59:01.112038    7371 out.go:345] Setting OutFile to fd 1 ...
	I1209 16:59:01.112226    7371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:59:01.112229    7371 out.go:358] Setting ErrFile to fd 2...
	I1209 16:59:01.112232    7371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 16:59:01.112359    7371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 16:59:01.112587    7371 out.go:352] Setting JSON to false
	I1209 16:59:01.112594    7371 mustload.go:65] Loading cluster: embed-certs-583000
	I1209 16:59:01.112805    7371 config.go:182] Loaded profile config "embed-certs-583000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 16:59:01.115977    7371 out.go:177] * The control-plane node embed-certs-583000 host is not running: state=Stopped
	I1209 16:59:01.119966    7371 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-583000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-583000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000: exit status 7 (33.22625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-583000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000: exit status 7 (34.172958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-583000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                    

Test pass (152/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.2/json-events 10.31
13 TestDownloadOnly/v1.31.2/preload-exists 0
16 TestDownloadOnly/v1.31.2/kubectl 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.08
18 TestDownloadOnly/v1.31.2/DeleteAll 0.12
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.37
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 197
29 TestAddons/serial/Volcano 38.93
31 TestAddons/serial/GCPAuth/Namespaces 0.08
32 TestAddons/serial/GCPAuth/FakeCredentials 7.36
35 TestAddons/parallel/Registry 15.31
36 TestAddons/parallel/Ingress 18.7
37 TestAddons/parallel/InspektorGadget 11.29
38 TestAddons/parallel/MetricsServer 5.29
40 TestAddons/parallel/CSI 44
41 TestAddons/parallel/Headlamp 16.56
42 TestAddons/parallel/CloudSpanner 6.17
43 TestAddons/parallel/LocalPath 51.9
44 TestAddons/parallel/NvidiaDevicePlugin 6.15
45 TestAddons/parallel/Yakd 10.23
47 TestAddons/StoppedEnableDisable 12.43
55 TestHyperKitDriverInstallOrUpdate 14.21
58 TestErrorSpam/setup 36.25
59 TestErrorSpam/start 0.36
60 TestErrorSpam/status 0.26
61 TestErrorSpam/pause 0.67
62 TestErrorSpam/unpause 0.6
63 TestErrorSpam/stop 64.29
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 49.72
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 38.36
70 TestFunctional/serial/KubeContext 0.03
71 TestFunctional/serial/KubectlGetPods 0.04
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.19
75 TestFunctional/serial/CacheCmd/cache/add_local 1.12
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
77 TestFunctional/serial/CacheCmd/cache/list 0.04
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
79 TestFunctional/serial/CacheCmd/cache/cache_reload 0.72
80 TestFunctional/serial/CacheCmd/cache/delete 0.08
81 TestFunctional/serial/MinikubeKubectlCmd 0.8
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.17
83 TestFunctional/serial/ExtraConfig 37.95
84 TestFunctional/serial/ComponentHealth 0.04
85 TestFunctional/serial/LogsCmd 0.64
86 TestFunctional/serial/LogsFileCmd 0.67
87 TestFunctional/serial/InvalidService 4.59
89 TestFunctional/parallel/ConfigCmd 0.24
90 TestFunctional/parallel/DashboardCmd 9.72
91 TestFunctional/parallel/DryRun 0.24
92 TestFunctional/parallel/InternationalLanguage 0.12
93 TestFunctional/parallel/StatusCmd 0.26
98 TestFunctional/parallel/AddonsCmd 0.11
99 TestFunctional/parallel/PersistentVolumeClaim 25.73
101 TestFunctional/parallel/SSHCmd 0.14
102 TestFunctional/parallel/CpCmd 0.45
104 TestFunctional/parallel/FileSync 0.07
105 TestFunctional/parallel/CertSync 0.44
109 TestFunctional/parallel/NodeLabels 0.04
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
113 TestFunctional/parallel/License 0.27
114 TestFunctional/parallel/Version/short 0.04
115 TestFunctional/parallel/Version/components 0.16
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.09
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.09
120 TestFunctional/parallel/ImageCommands/ImageBuild 1.79
121 TestFunctional/parallel/ImageCommands/Setup 1.74
122 TestFunctional/parallel/DockerEnv/bash 0.31
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.29
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.46
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.1
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.37
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.17
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.14
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.27
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.19
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
144 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
145 TestFunctional/parallel/ServiceCmd/List 0.32
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.13
148 TestFunctional/parallel/ServiceCmd/Format 0.11
149 TestFunctional/parallel/ServiceCmd/URL 0.13
150 TestFunctional/parallel/ProfileCmd/profile_not_create 0.15
151 TestFunctional/parallel/ProfileCmd/profile_list 0.15
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.14
153 TestFunctional/parallel/MountCmd/any-port 6.09
154 TestFunctional/parallel/MountCmd/specific-port 0.93
155 TestFunctional/parallel/MountCmd/VerifyCleanup 1.83
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.01
168 TestMultiControlPlane/serial/CopyFile 0.04
176 TestImageBuild/serial/Setup 34.23
177 TestImageBuild/serial/NormalBuild 1.33
178 TestImageBuild/serial/BuildWithBuildArg 0.44
179 TestImageBuild/serial/BuildWithDockerIgnore 0.33
180 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.31
185 TestJSONOutput/start/Audit 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 7.06
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.22
212 TestMainNoArgs 0.04
259 TestStoppedBinaryUpgrade/Setup 1.07
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
276 TestNoKubernetes/serial/ProfileList 31.51
277 TestNoKubernetes/serial/Stop 3.18
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
289 TestStoppedBinaryUpgrade/MinikubeLogs 0.73
294 TestStartStop/group/old-k8s-version/serial/Stop 3.56
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
307 TestStartStop/group/no-preload/serial/Stop 2.87
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
312 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.89
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
327 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
329 TestStartStop/group/newest-cni/serial/Stop 3.17
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
334 TestStartStop/group/embed-certs/serial/Stop 3.42
335 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
337 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1209 15:43:11.218156    1742 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1209 15:43:11.218670    1742 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
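Note: preload-exists only verifies that the cached tarball is already on disk under MINIKUBE_HOME. A minimal stat-based sketch of that check; the path is copied from the log lines above, and this is not minikube's actual implementation.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	home := "/Users/jenkins/minikube-integration/20062-1231/.minikube" // MINIKUBE_HOME in this run
	tarball := filepath.Join(home, "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4")

	// A plain stat answers "found local preload" vs "fall back to the
	// remote preload URL and download".
	if _, err := os.Stat(tarball); err != nil {
		fmt.Printf("no local preload, would download: %v\n", err)
		return
	}
	fmt.Printf("Found local preload: %s\n", tarball)
}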

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-632000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-632000: exit status 85 (99.292541ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-632000 | jenkins | v1.34.0 | 09 Dec 24 15:42 PST |          |
	|         | -p download-only-632000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 15:42:47
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 15:42:47.863790    1743 out.go:345] Setting OutFile to fd 1 ...
	I1209 15:42:47.863972    1743 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 15:42:47.863976    1743 out.go:358] Setting ErrFile to fd 2...
	I1209 15:42:47.863978    1743 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 15:42:47.864116    1743 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	W1209 15:42:47.864209    1743 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/20062-1231/.minikube/config/config.json: open /Users/jenkins/minikube-integration/20062-1231/.minikube/config/config.json: no such file or directory
	I1209 15:42:47.865671    1743 out.go:352] Setting JSON to true
	I1209 15:42:47.884799    1743 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":737,"bootTime":1733787030,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 15:42:47.884913    1743 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 15:42:47.890615    1743 out.go:97] [download-only-632000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 15:42:47.890765    1743 notify.go:220] Checking for updates...
	W1209 15:42:47.890833    1743 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball: no such file or directory
	I1209 15:42:47.894532    1743 out.go:169] MINIKUBE_LOCATION=20062
	I1209 15:42:47.897567    1743 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 15:42:47.902571    1743 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 15:42:47.906589    1743 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 15:42:47.908002    1743 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	W1209 15:42:47.913645    1743 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 15:42:47.913900    1743 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 15:42:47.917605    1743 out.go:97] Using the qemu2 driver based on user configuration
	I1209 15:42:47.917627    1743 start.go:297] selected driver: qemu2
	I1209 15:42:47.917643    1743 start.go:901] validating driver "qemu2" against <nil>
	I1209 15:42:47.917734    1743 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 15:42:47.921627    1743 out.go:169] Automatically selected the socket_vmnet network
	I1209 15:42:47.928592    1743 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1209 15:42:47.928688    1743 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 15:42:47.928728    1743 cni.go:84] Creating CNI manager for ""
	I1209 15:42:47.928763    1743 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1209 15:42:47.928831    1743 start.go:340] cluster config:
	{Name:download-only-632000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-632000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 15:42:47.933569    1743 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 15:42:47.937467    1743 out.go:97] Downloading VM boot image ...
	I1209 15:42:47.937480    1743 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso
	I1209 15:42:58.466568    1743 out.go:97] Starting "download-only-632000" primary control-plane node in "download-only-632000" cluster
	I1209 15:42:58.466600    1743 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1209 15:42:58.521837    1743 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1209 15:42:58.521856    1743 cache.go:56] Caching tarball of preloaded images
	I1209 15:42:58.522031    1743 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1209 15:42:58.528151    1743 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1209 15:42:58.528158    1743 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1209 15:42:58.613237    1743 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1209 15:43:09.938403    1743 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1209 15:43:09.938577    1743 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1209 15:43:10.633178    1743 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1209 15:43:10.633380    1743 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/download-only-632000/config.json ...
	I1209 15:43:10.633397    1743 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/download-only-632000/config.json: {Name:mk306deaa9e300654af025aebb243664b8b97ba0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 15:43:10.633659    1743 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1209 15:43:10.633900    1743 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1209 15:43:11.168231    1743 out.go:193] 
	W1209 15:43:11.174233    1743 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20062-1231/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107f78320 0x107f78320 0x107f78320 0x107f78320 0x107f78320 0x107f78320 0x107f78320] Decompressors:map[bz2:0x14000697d70 gz:0x14000697d78 tar:0x14000697c50 tar.bz2:0x14000697cb0 tar.gz:0x14000697d00 tar.xz:0x14000697d30 tar.zst:0x14000697d60 tbz2:0x14000697cb0 tgz:0x14000697d00 txz:0x14000697d30 tzst:0x14000697d60 xz:0x14000697d90 zip:0x14000697da0 zst:0x14000697d98] Getters:map[file:0x1400198c560 http:0x140008ec6e0 https:0x140008ec730] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1209 15:43:11.174256    1743 out_reason.go:110] 
	W1209 15:43:11.183170    1743 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1209 15:43:11.187058    1743 out.go:193] 
	
	
	* The control-plane node download-only-632000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-632000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
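Note: the exit status 85 above is expected by the test; the interesting part of the log is the checksum-gated download. go-getter fetched the kubectl.sha256 URL first, got a 404, and rejected the whole fetch. A minimal sketch of the same pattern, assuming the hashicorp/go-getter v1 API that minikube's download.go wraps (the destination path is illustrative):

	package main

	import (
		"fmt"

		getter "github.com/hashicorp/go-getter"
	)

	func main() {
		// The ?checksum=file:... query makes go-getter fetch the .sha256 file
		// before the payload; a 404 there fails the download up front, which
		// is the "bad response code: 404" seen in the log above.
		src := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl" +
			"?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		dst := "/tmp/kubectl.download" // illustrative destination

		if err := getter.GetFile(dst, src); err != nil {
			fmt.Println("download failed:", err)
		}
	}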

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-632000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.2/json-events (10.31s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-063000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-063000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 : (10.307917417s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (10.31s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1209 15:43:21.903237    1742 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1209 15:43:21.903283    1742 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
--- PASS: TestDownloadOnly/v1.31.2/kubectl (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-063000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-063000: exit status 85 (79.538667ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-632000 | jenkins | v1.34.0 | 09 Dec 24 15:42 PST |                     |
	|         | -p download-only-632000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 09 Dec 24 15:43 PST | 09 Dec 24 15:43 PST |
	| delete  | -p download-only-632000        | download-only-632000 | jenkins | v1.34.0 | 09 Dec 24 15:43 PST | 09 Dec 24 15:43 PST |
	| start   | -o=json --download-only        | download-only-063000 | jenkins | v1.34.0 | 09 Dec 24 15:43 PST |                     |
	|         | -p download-only-063000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/09 15:43:11
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1209 15:43:11.627231    1769 out.go:345] Setting OutFile to fd 1 ...
	I1209 15:43:11.627389    1769 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 15:43:11.627393    1769 out.go:358] Setting ErrFile to fd 2...
	I1209 15:43:11.627396    1769 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 15:43:11.627537    1769 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 15:43:11.628675    1769 out.go:352] Setting JSON to true
	I1209 15:43:11.646405    1769 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":761,"bootTime":1733787030,"procs":529,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 15:43:11.646475    1769 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 15:43:11.651237    1769 out.go:97] [download-only-063000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 15:43:11.651298    1769 notify.go:220] Checking for updates...
	I1209 15:43:11.655253    1769 out.go:169] MINIKUBE_LOCATION=20062
	I1209 15:43:11.658320    1769 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 15:43:11.661204    1769 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 15:43:11.664205    1769 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 15:43:11.668280    1769 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	W1209 15:43:11.674223    1769 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1209 15:43:11.674421    1769 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 15:43:11.677162    1769 out.go:97] Using the qemu2 driver based on user configuration
	I1209 15:43:11.677171    1769 start.go:297] selected driver: qemu2
	I1209 15:43:11.677175    1769 start.go:901] validating driver "qemu2" against <nil>
	I1209 15:43:11.677225    1769 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1209 15:43:11.680226    1769 out.go:169] Automatically selected the socket_vmnet network
	I1209 15:43:11.685552    1769 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1209 15:43:11.685686    1769 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1209 15:43:11.685705    1769 cni.go:84] Creating CNI manager for ""
	I1209 15:43:11.685732    1769 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1209 15:43:11.685738    1769 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1209 15:43:11.685777    1769 start.go:340] cluster config:
	{Name:download-only-063000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-063000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 15:43:11.690103    1769 iso.go:125] acquiring lock: {Name:mkebdeccbfcce02dd4099f666d6e6725298cada2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1209 15:43:11.693323    1769 out.go:97] Starting "download-only-063000" primary control-plane node in "download-only-063000" cluster
	I1209 15:43:11.693332    1769 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 15:43:11.747651    1769 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 15:43:11.747668    1769 cache.go:56] Caching tarball of preloaded images
	I1209 15:43:11.747923    1769 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 15:43:11.753138    1769 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1209 15:43:11.753145    1769 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1209 15:43:11.844186    1769 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4?checksum=md5:5f3d7369b12f47138aa2863bb7bda916 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1209 15:43:20.180070    1769 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1209 15:43:20.180264    1769 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1209 15:43:20.702592    1769 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1209 15:43:20.702785    1769 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/download-only-063000/config.json ...
	I1209 15:43:20.702801    1769 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/download-only-063000/config.json: {Name:mke65a5b759106cfad8b1c6d0e85ddb33a280237 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1209 15:43:20.703078    1769 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1209 15:43:20.703240    1769 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/20062-1231/.minikube/cache/darwin/arm64/v1.31.2/kubectl
	
	
	* The control-plane node download-only-063000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-063000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.08s)
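Note: the lock.go:35 line above shows every config.json write guarded by a named lock with Delay:500ms and Timeout:1m0s. A sketch of that acquire-with-retry shape, making no assumption about minikube's actual lock implementation:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// writeFileLocked retries an exclusive lock file every delay until the
	// timeout elapses, writes the payload, then releases. Illustrative only.
	func writeFileLocked(path string, data []byte, delay, timeout time.Duration) error {
		lock := path + ".lock"
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				defer os.Remove(lock)
				return os.WriteFile(path, data, 0o644)
			}
			if time.Now().After(deadline) {
				return errors.New("timed out acquiring " + lock)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		fmt.Println(writeFileLocked("/tmp/config.json", []byte("{}"),
			500*time.Millisecond, time.Minute))
	}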

TestDownloadOnly/v1.31.2/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.12s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-063000
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.37s)

=== RUN   TestBinaryMirror
I1209 15:43:22.444348    1742 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-688000 --alsologtostderr --binary-mirror http://127.0.0.1:49310 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-688000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-688000
--- PASS: TestBinaryMirror (0.37s)
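Note: TestBinaryMirror points minikube at a local HTTP endpoint (127.0.0.1:49310 above) instead of dl.k8s.io. Any static file server that mirrors the release path layout works; a minimal sketch (the ./mirror directory is hypothetical):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve cached release binaries so that
		//   minikube start --binary-mirror http://127.0.0.1:49310
		// resolves kubectl/kubeadm/kubelet locally.
		fs := http.FileServer(http.Dir("./mirror"))
		log.Fatal(http.ListenAndServe("127.0.0.1:49310", fs))
	}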

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-188000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-188000: exit status 85 (62.021834ms)

-- stdout --
	* Profile "addons-188000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-188000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-188000
addons_test.go:950: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-188000: exit status 85 (64.761708ms)

-- stdout --
	* Profile "addons-188000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-188000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
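Note: both PreSetup tests assert a specific non-zero exit code (85), not just failure. In Go that distinction comes from *exec.ExitError; a sketch of the check, with the binary and profile names taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64",
			"addons", "enable", "dashboard", "-p", "addons-188000")
		out, err := cmd.CombinedOutput()
		if exitErr, ok := err.(*exec.ExitError); ok {
			// ExitCode() distinguishes "ran and failed" from "could not run".
			fmt.Printf("exit %d: %s", exitErr.ExitCode(), out)
		}
	}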

TestAddons/Setup (197s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-188000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-darwin-arm64 start -p addons-188000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m16.996896833s)
--- PASS: TestAddons/Setup (197.00s)

TestAddons/serial/Volcano (38.93s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 7.403209ms
addons_test.go:807: volcano-scheduler stabilized in 7.463667ms
addons_test.go:815: volcano-admission stabilized in 7.513375ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-kz76q" [f68c289f-cf8d-4083-aa35-faf74f3c1aef] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003755s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-mm7cq" [380cc775-6c5b-456c-a511-5e8aac8e305c] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005617333s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-tlm89" [ba58779b-4c90-4e93-8408-6e63431c843b] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003596459s
addons_test.go:842: (dbg) Run:  kubectl --context addons-188000 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-188000 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-188000 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [5ab9ba80-5fb5-426d-8688-9e3c4098022a] Pending
helpers_test.go:344: "test-job-nginx-0" [5ab9ba80-5fb5-426d-8688-9e3c4098022a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [5ab9ba80-5fb5-426d-8688-9e3c4098022a] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.005236167s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-188000 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-188000 addons disable volcano --alsologtostderr -v=1: (10.694945s)
--- PASS: TestAddons/serial/Volcano (38.93s)

TestAddons/serial/GCPAuth/Namespaces (0.08s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-188000 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-188000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.08s)

TestAddons/serial/GCPAuth/FakeCredentials (7.36s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-188000 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-188000 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f8b5c933-d408-40de-9708-b7ea6218a479] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f8b5c933-d408-40de-9708-b7ea6218a479] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.004259875s
addons_test.go:633: (dbg) Run:  kubectl --context addons-188000 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-188000 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-188000 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-188000 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.36s)

TestAddons/parallel/Registry (15.31s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 1.455958ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5cc95cd69-4prs7" [18831b9c-573a-4041-a6ae-66ad94bfa3aa] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.011304209s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-s97pv" [a93da306-cb5f-48c4-bae7-d082bc9c042c] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011207625s
addons_test.go:331: (dbg) Run:  kubectl --context addons-188000 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-188000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-188000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.973396167s)
addons_test.go:350: (dbg) Run:  out/minikube-darwin-arm64 -p addons-188000 ip
2024/12/09 15:47:49 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-188000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.31s)
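Note: the helpers_test.go:344 waits above poll pods by label selector until they report Running. A rough client-go equivalent, assuming a kubeconfig in the default location (selector and namespace taken from the Registry test):

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for { // the real helper also enforces a deadline, omitted here
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "actual-registry=true"})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == "Running" {
						fmt.Println(p.Name, "is running")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
	}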

TestAddons/parallel/Ingress (18.7s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-188000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-188000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-188000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6f67a550-e77a-47f9-869d-82f16f55ba2a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6f67a550-e77a-47f9-869d-82f16f55ba2a] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.007199042s
I1209 15:49:05.873105    1742 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-darwin-arm64 -p addons-188000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-188000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-arm64 -p addons-188000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-188000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-188000 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-188000 addons disable ingress --alsologtostderr -v=1: (7.256277375s)
--- PASS: TestAddons/parallel/Ingress (18.70s)
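Note: the Ingress check curls the node IP while overriding the Host header so nginx routes by virtual host. The same request in Go, with the IP and hostname taken from the log above:

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest("GET", "http://192.168.105.2/", nil)
		if err != nil {
			panic(err)
		}
		req.Host = "nginx.example.com" // net/http sends this as the Host header
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, len(body), "bytes")
	}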

TestAddons/parallel/InspektorGadget (11.29s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hw89m" [faf83ff2-2aa9-4d1a-9050-58cad1e53dc1] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.009343292s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-188000 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-188000 addons disable inspektor-gadget --alsologtostderr -v=1: (5.278358792s)
--- PASS: TestAddons/parallel/InspektorGadget (11.29s)

TestAddons/parallel/MetricsServer (5.29s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 1.35175ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-ljr2k" [9920ae2e-4b1a-4be4-bb51-bf81e08e2864] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006106291s
addons_test.go:402: (dbg) Run:  kubectl --context addons-188000 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-188000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.29s)

TestAddons/parallel/CSI (44s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1209 15:48:11.573650    1742 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1209 15:48:11.576234    1742 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1209 15:48:11.576241    1742 kapi.go:107] duration metric: took 2.616708ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 2.620125ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-188000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-188000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b8b0316c-26b8-41f0-985e-9d1b3a733441] Pending
helpers_test.go:344: "task-pv-pod" [b8b0316c-26b8-41f0-985e-9d1b3a733441] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b8b0316c-26b8-41f0-985e-9d1b3a733441] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.006063125s
addons_test.go:511: (dbg) Run:  kubectl --context addons-188000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-188000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-188000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-188000 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-188000 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-188000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-188000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7383b1f3-86c7-43ab-8fcf-32abb88c96fb] Pending
helpers_test.go:344: "task-pv-pod-restore" [7383b1f3-86c7-43ab-8fcf-32abb88c96fb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7383b1f3-86c7-43ab-8fcf-32abb88c96fb] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00649075s
addons_test.go:553: (dbg) Run:  kubectl --context addons-188000 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-188000 delete pod task-pv-pod-restore: (1.025487125s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-188000 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-188000 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-188000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-188000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-188000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.088583042s)
--- PASS: TestAddons/parallel/CSI (44.00s)
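Note: the repeated helpers_test.go:394 lines above are a poll loop on the PVC phase via kubectl's jsonpath output. A condensed sketch of that loop (deadline handling omitted):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		for {
			out, err := exec.Command("kubectl", "--context", "addons-188000",
				"get", "pvc", "hpvc", "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && string(out) == "Bound" {
				fmt.Println("pvc bound")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}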

TestAddons/parallel/Headlamp (16.56s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-188000 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-tdh97" [bd95c7aa-a0ad-4949-9e8f-d65e3e319589] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-tdh97" [bd95c7aa-a0ad-4949-9e8f-d65e3e319589] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003995292s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-188000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-188000 addons disable headlamp --alsologtostderr -v=1: (5.215906334s)
--- PASS: TestAddons/parallel/Headlamp (16.56s)

TestAddons/parallel/CloudSpanner (6.17s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-m5c95" [85722071-fbde-4419-a29e-6b3a2ee67673] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004067375s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-188000 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.17s)

TestAddons/parallel/LocalPath (51.9s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-188000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-188000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-188000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [86428a81-5f09-46aa-9350-351b051ae8ec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [86428a81-5f09-46aa-9350-351b051ae8ec] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [86428a81-5f09-46aa-9350-351b051ae8ec] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004688417s
addons_test.go:906: (dbg) Run:  kubectl --context addons-188000 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-darwin-arm64 -p addons-188000 ssh "cat /opt/local-path-provisioner/pvc-716e9d07-5122-44cc-85ce-dcba3784be9b_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-188000 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-188000 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-188000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-188000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.394007416s)
--- PASS: TestAddons/parallel/LocalPath (51.90s)

TestAddons/parallel/NvidiaDevicePlugin (6.15s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5ddlx" [a46d0e00-900c-4e86-86c6-332c5f133b11] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004692125s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-188000 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.15s)

TestAddons/parallel/Yakd (10.23s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-q9ql4" [2e303bbb-7c83-4448-ace4-eec813131476] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00497975s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-188000 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-188000 addons disable yakd --alsologtostderr -v=1: (5.225280208s)
--- PASS: TestAddons/parallel/Yakd (10.23s)

TestAddons/StoppedEnableDisable (12.43s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-188000
addons_test.go:170: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-188000: (12.237357958s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-188000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-188000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-188000
--- PASS: TestAddons/StoppedEnableDisable (12.43s)

TestHyperKitDriverInstallOrUpdate (14.21s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1209 16:44:10.476305    1742 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1209 16:44:10.476517    1742 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W1209 16:44:12.458139    1742 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1209 16:44:12.458341    1742 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1209 16:44:12.458381    1742 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2287877910/001/docker-machine-driver-hyperkit
I1209 16:44:14.016031    1742 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2287877910/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1050a16e0 0x1050a16e0 0x1050a16e0 0x1050a16e0 0x1050a16e0 0x1050a16e0 0x1050a16e0] Decompressors:map[bz2:0x14000610160 gz:0x14000610168 tar:0x14000610110 tar.bz2:0x14000610120 tar.gz:0x14000610130 tar.xz:0x14000610140 tar.zst:0x14000610150 tbz2:0x14000610120 tgz:0x14000610130 txz:0x14000610140 tzst:0x14000610150 xz:0x14000610170 zip:0x14000610180 zst:0x14000610178] Getters:map[file:0x1400081b510 http:0x14001aa90e0 https:0x14001aa9130] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1209 16:44:14.016156    1742 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2287877910/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (14.21s)
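Note: the test still passes despite the 404 because, as driver.go:46 logs, the downloader falls back from the arch-suffixed release asset to the common name. A sketch of that fallback, assuming plain net/http rather than minikube's actual downloader:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// fetchDriver tries the arch-suffixed asset first and falls back to the
	// unsuffixed name, mirroring the log above. Paths are illustrative.
	func fetchDriver(base, arch, dst string) error {
		for _, url := range []string{base + "-" + arch, base} {
			resp, err := http.Get(url)
			if err != nil {
				continue // try the next candidate
			}
			if resp.StatusCode != http.StatusOK {
				resp.Body.Close() // e.g. 404 on the -arm64 asset
				continue
			}
			f, err := os.Create(dst)
			if err != nil {
				resp.Body.Close()
				return err
			}
			_, err = io.Copy(f, resp.Body)
			resp.Body.Close()
			f.Close()
			return err
		}
		return fmt.Errorf("no release asset found for %s", base)
	}

	func main() {
		fmt.Println(fetchDriver(
			"https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit",
			"arm64", "/tmp/docker-machine-driver-hyperkit"))
	}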

TestErrorSpam/setup (36.25s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-812000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-812000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-812000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-812000 --driver=qemu2 : (36.2544965s)
--- PASS: TestErrorSpam/setup (36.25s)

TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-812000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-812000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-812000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-812000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-812000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-812000 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.26s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-812000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-812000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-812000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-812000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-812000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-812000 status
--- PASS: TestErrorSpam/status (0.26s)

TestErrorSpam/pause (0.67s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-812000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-812000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-812000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-812000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-812000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-812000 pause
--- PASS: TestErrorSpam/pause (0.67s)

TestErrorSpam/unpause (0.6s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-812000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-812000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-812000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-812000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-812000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-812000 unpause
--- PASS: TestErrorSpam/unpause (0.60s)

TestErrorSpam/stop (64.29s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-812000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-812000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-812000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-812000 stop: (12.209991333s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-812000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-812000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-812000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-812000 stop: (26.03555525s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-812000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-812000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-812000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-812000 stop: (26.038554084s)
--- PASS: TestErrorSpam/stop (64.29s)
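
Note: each TestErrorSpam subtest runs the same minikube subcommand several times against one throwaway profile and, roughly, fails if the combined output picks up warnings or errors outside an allowlist. A minimal sketch of the pattern (profile name is the one from this run; any writable log directory works):

	out/minikube-darwin-arm64 -p nospam-812000 --log_dir /tmp/nospam stop   # same command repeated;
	out/minikube-darwin-arm64 -p nospam-812000 --log_dir /tmp/nospam stop   # the harness scans the
	out/minikube-darwin-arm64 -p nospam-812000 --log_dir /tmp/nospam stop   # output for new spam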

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/20062-1231/.minikube/files/etc/test/nested/copy/1742/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (49.72s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-121000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E1209 15:51:39.877625    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:51:39.885227    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:51:39.898592    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:51:39.921967    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:51:39.965386    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:51:40.048723    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:51:40.210438    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:51:40.533098    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:51:41.176514    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:51:42.460013    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:51:45.023469    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:51:50.146929    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-121000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (49.716112791s)
--- PASS: TestFunctional/serial/StartWithProxy (49.72s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.36s)
=== RUN   TestFunctional/serial/SoftStart
I1209 15:51:59.264080    1742 config.go:182] Loaded profile config "functional-121000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-121000 --alsologtostderr -v=8
E1209 15:52:00.390402    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
E1209 15:52:20.873868    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-121000 --alsologtostderr -v=8: (38.362552167s)
functional_test.go:663: soft start took 38.362997042s for "functional-121000" cluster.
I1209 15:52:37.626641    1742 config.go:182] Loaded profile config "functional-121000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (38.36s)
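
A "soft start" here is simply minikube start re-run against a profile whose VM already exists and is running; minikube reuses the machine instead of re-provisioning it, which is why this pass takes ~38s versus the ~50s cold start above. Sketch, using the profile from this run:

	out/minikube-darwin-arm64 start -p functional-121000 --alsologtostderr -v=8   # VM already up: reused, not recreated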

TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-121000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.19s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-121000 cache add registry.k8s.io/pause:3.1: (1.195332625s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-121000 cache add registry.k8s.io/pause:3.3: (1.100861583s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.19s)

TestFunctional/serial/CacheCmd/cache/add_local (1.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-121000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1129976110/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 cache add minikube-local-cache-test:functional-121000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 cache delete minikube-local-cache-test:functional-121000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-121000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.72s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-121000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (73.707917ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.72s)
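
The cache_reload flow above can be reproduced by hand: remove a cached image inside the node, confirm crictl no longer sees it, then have minikube push everything in its local cache back into the node. Sketch using the same image and profile as this run:

	out/minikube-darwin-arm64 -p functional-121000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-arm64 -p functional-121000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
	out/minikube-darwin-arm64 -p functional-121000 cache reload
	out/minikube-darwin-arm64 -p functional-121000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again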

TestFunctional/serial/CacheCmd/cache/delete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.8s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 kubectl -- --context functional-121000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.80s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.17s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-121000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-121000 get pods: (1.167926417s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.17s)
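
These two subtests exercise the same behavior through different entry points: minikube kubectl downloads a version-matched kubectl on first use and forwards everything after -- to it, and the cached binary can also be invoked directly. Equivalent invocations:

	out/minikube-darwin-arm64 -p functional-121000 kubectl -- --context functional-121000 get pods
	out/kubectl --context functional-121000 get pods   # the binary minikube cached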

TestFunctional/serial/ExtraConfig (37.95s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-121000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1209 15:53:01.837263    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-121000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.950539625s)
functional_test.go:761: restart took 37.950628s for "functional-121000" cluster.
I1209 15:53:22.887089    1742 config.go:182] Loaded profile config "functional-121000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (37.95s)
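
--extra-config takes component.key=value pairs and forwards the flag to the named Kubernetes component (apiserver, kubelet, scheduler, controller-manager, ...), so the restart above re-launches the apiserver with an extra admission plugin:

	out/minikube-darwin-arm64 start -p functional-121000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all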

TestFunctional/serial/ComponentHealth (0.04s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-121000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.64s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.64s)

TestFunctional/serial/LogsFileCmd (0.67s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd1817433744/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.67s)

TestFunctional/serial/InvalidService (4.59s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-121000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-121000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-121000: exit status 115 (150.201458ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30943 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-121000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-121000 delete -f testdata/invalidsvc.yaml: (1.344497542s)
--- PASS: TestFunctional/serial/InvalidService (4.59s)
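
This subtest applies a Service whose selector matches no running pod, then asserts that minikube service refuses to hand out the URL, exiting 115 with SVC_UNREACHABLE instead. Sketch against the same testdata manifest:

	kubectl --context functional-121000 apply -f testdata/invalidsvc.yaml
	out/minikube-darwin-arm64 service invalid-svc -p functional-121000   # expect exit 115 / SVC_UNREACHABLE
	kubectl --context functional-121000 delete -f testdata/invalidsvc.yaml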

TestFunctional/parallel/ConfigCmd (0.24s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-121000 config get cpus: exit status 14 (33.706792ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-121000 config get cpus: exit status 14 (33.829542ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)
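
minikube config set/get/unset manages per-profile defaults; config get on a key that is not set exits with status 14 ("specified key could not be found in config"), which the test asserts twice above:

	out/minikube-darwin-arm64 -p functional-121000 config set cpus 2
	out/minikube-darwin-arm64 -p functional-121000 config get cpus     # prints 2
	out/minikube-darwin-arm64 -p functional-121000 config unset cpus
	out/minikube-darwin-arm64 -p functional-121000 config get cpus     # exit 14: key not found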

TestFunctional/parallel/DashboardCmd (9.72s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-121000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-121000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2595: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.72s)

TestFunctional/parallel/DryRun (0.24s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-121000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-121000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (121.672ms)

-- stdout --
	* [functional-121000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1209 15:54:16.126744    2580 out.go:345] Setting OutFile to fd 1 ...
	I1209 15:54:16.126941    2580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 15:54:16.126944    2580 out.go:358] Setting ErrFile to fd 2...
	I1209 15:54:16.126947    2580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 15:54:16.127098    2580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 15:54:16.128208    2580 out.go:352] Setting JSON to false
	I1209 15:54:16.145838    2580 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1426,"bootTime":1733787030,"procs":530,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 15:54:16.145913    2580 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 15:54:16.150807    2580 out.go:177] * [functional-121000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1209 15:54:16.158772    2580 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 15:54:16.158838    2580 notify.go:220] Checking for updates...
	I1209 15:54:16.165797    2580 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 15:54:16.169694    2580 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 15:54:16.172708    2580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 15:54:16.175772    2580 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 15:54:16.178749    2580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 15:54:16.182060    2580 config.go:182] Loaded profile config "functional-121000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 15:54:16.182330    2580 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 15:54:16.185711    2580 out.go:177] * Using the qemu2 driver based on existing profile
	I1209 15:54:16.192745    2580 start.go:297] selected driver: qemu2
	I1209 15:54:16.192752    2580 start.go:901] validating driver "qemu2" against &{Name:functional-121000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.2 ClusterName:functional-121000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 15:54:16.192811    2580 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 15:54:16.198586    2580 out.go:201] 
	W1209 15:54:16.202701    2580 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1209 15:54:16.206721    2580 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-121000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)
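
--dry-run runs the full start validation pipeline without creating or mutating the VM, so it is a cheap way to probe flag errors; here the 250MB request trips the 1800MB minimum-memory check and exits 23, while the second run with valid flags validates cleanly:

	out/minikube-darwin-arm64 start -p functional-121000 --dry-run --memory 250MB --driver=qemu2   # exit 23: RSRC_INSUFFICIENT_REQ_MEMORY
	out/minikube-darwin-arm64 start -p functional-121000 --dry-run --driver=qemu2                  # validates, exits 0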

TestFunctional/parallel/InternationalLanguage (0.12s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-121000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-121000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (120.656708ms)

-- stdout --
	* [functional-121000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1209 15:54:16.001105    2576 out.go:345] Setting OutFile to fd 1 ...
	I1209 15:54:16.001256    2576 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 15:54:16.001263    2576 out.go:358] Setting ErrFile to fd 2...
	I1209 15:54:16.001269    2576 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1209 15:54:16.001394    2576 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
	I1209 15:54:16.002915    2576 out.go:352] Setting JSON to false
	I1209 15:54:16.022453    2576 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1426,"bootTime":1733787030,"procs":530,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1209 15:54:16.022543    2576 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1209 15:54:16.028799    2576 out.go:177] * [functional-121000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1209 15:54:16.036811    2576 out.go:177]   - MINIKUBE_LOCATION=20062
	I1209 15:54:16.036881    2576 notify.go:220] Checking for updates...
	I1209 15:54:16.043693    2576 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	I1209 15:54:16.047737    2576 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1209 15:54:16.050771    2576 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1209 15:54:16.053771    2576 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	I1209 15:54:16.056747    2576 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1209 15:54:16.060074    2576 config.go:182] Loaded profile config "functional-121000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1209 15:54:16.060336    2576 driver.go:394] Setting default libvirt URI to qemu:///system
	I1209 15:54:16.063651    2576 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1209 15:54:16.070739    2576 start.go:297] selected driver: qemu2
	I1209 15:54:16.070747    2576 start.go:901] validating driver "qemu2" against &{Name:functional-121000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.2 ClusterName:functional-121000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1209 15:54:16.070813    2576 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1209 15:54:16.076768    2576 out.go:201] 
	W1209 15:54:16.080761    2576 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1209 15:54:16.084570    2576 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
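
The French output above is the same dry-run failure with localization switched on; minikube selects its message catalog from the standard locale environment variables, so the test presumably runs the command with one of them set, e.g. (sketch, assuming LC_ALL is honored here):

	LC_ALL=fr out/minikube-darwin-arm64 start -p functional-121000 --dry-run --memory 250MB --driver=qemu2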

TestFunctional/parallel/StatusCmd (0.26s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.26s)
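
minikube status supports a Go template via -f (field names come from the status struct: .Host, .Kubelet, .APIServer, .Kubeconfig) as well as -o json for machine-readable output; the "kublet" spelling in the test's template is just an arbitrary output label, not a field name:

	out/minikube-darwin-arm64 -p functional-121000 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
	out/minikube-darwin-arm64 -p functional-121000 status -o json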

TestFunctional/parallel/AddonsCmd (0.11s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (25.73s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [13e76bb2-2a8e-4fa4-9cde-73436ab61453] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009767042s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-121000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-121000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-121000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-121000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cfe39527-fb4f-458e-9c90-81faa0603606] Pending
helpers_test.go:344: "sp-pod" [cfe39527-fb4f-458e-9c90-81faa0603606] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cfe39527-fb4f-458e-9c90-81faa0603606] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.007393667s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-121000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-121000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-121000 delete -f testdata/storage-provisioner/pod.yaml: (1.193829125s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-121000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4c3c7bd4-8659-440f-b040-382f8a92aefd] Pending
helpers_test.go:344: "sp-pod" [4c3c7bd4-8659-440f-b040-382f8a92aefd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4c3c7bd4-8659-440f-b040-382f8a92aefd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.010725708s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-121000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.73s)
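
The PVC subtest checks that data written through a claim survives pod deletion: it writes a file from the first sp-pod, deletes the pod, recreates it against the same claim, and lists the file again. Condensed, using the same testdata manifests:

	kubectl --context functional-121000 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-121000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-121000 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-121000 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-121000 apply -f testdata/storage-provisioner/pod.yaml    # new pod, same claim
	kubectl --context functional-121000 exec sp-pod -- ls /tmp/mount                      # foo persisted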

TestFunctional/parallel/SSHCmd (0.14s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.14s)

TestFunctional/parallel/CpCmd (0.45s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh -n functional-121000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 cp functional-121000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd103209686/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh -n functional-121000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh -n functional-121000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.45s)

TestFunctional/parallel/FileSync (0.07s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1742/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "sudo cat /etc/test/nested/copy/1742/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.44s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1742.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "sudo cat /etc/ssl/certs/1742.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1742.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "sudo cat /usr/share/ca-certificates/1742.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/17422.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "sudo cat /etc/ssl/certs/17422.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/17422.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "sudo cat /usr/share/ca-certificates/17422.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.44s)
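
CertSync verifies that certificates placed under the minikube home's certs directory are synced into the guest at both /etc/ssl/certs/<name>.pem and /usr/share/ca-certificates/<name>.pem, plus a hash-named copy (e.g. 51391683.0); the 1742/17422 file names in this run come from the test harness PID. A manual spot check over ssh:

	out/minikube-darwin-arm64 -p functional-121000 ssh "sudo cat /etc/ssl/certs/1742.pem"
	out/minikube-darwin-arm64 -p functional-121000 ssh "sudo cat /etc/ssl/certs/51391683.0"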

TestFunctional/parallel/NodeLabels (0.04s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-121000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-121000 ssh "sudo systemctl is-active crio": exit status 1 (68.419584ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)
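
With docker as the active container runtime, the crio service is expected to be disabled inside the guest; systemctl is-active prints "inactive" and exits non-zero, and that non-zero exit is exactly what the test treats as a pass:

	out/minikube-darwin-arm64 -p functional-121000 ssh "sudo systemctl is-active crio"   # prints "inactive", exit status 3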

TestFunctional/parallel/License (0.27s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.16s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.16s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-121000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-121000
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-121000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-121000 image ls --format short --alsologtostderr:
I1209 15:54:23.595219    2621 out.go:345] Setting OutFile to fd 1 ...
I1209 15:54:23.595403    2621 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 15:54:23.595407    2621 out.go:358] Setting ErrFile to fd 2...
I1209 15:54:23.595409    2621 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 15:54:23.595544    2621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
I1209 15:54:23.595967    2621 config.go:182] Loaded profile config "functional-121000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 15:54:23.596025    2621 config.go:182] Loaded profile config "functional-121000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 15:54:23.596861    2621 ssh_runner.go:195] Run: systemctl --version
I1209 15:54:23.596869    2621 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/functional-121000/id_rsa Username:docker}
I1209 15:54:23.628176    2621 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 image ls --format table --alsologtostderr
E1209 15:54:23.760384    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-121000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/kube-apiserver              | v1.31.2           | f9c26480f1e72 | 91.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.2           | d6b061e73ae45 | 66MB   |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/minikube-local-cache-test | functional-121000 | c0c07426fabd6 | 30B    |
| docker.io/library/nginx                     | alpine            | dba92e6b64886 | 56.9MB |
| docker.io/kicbase/echo-server               | functional-121000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | latest            | bdf62fd3a32f1 | 197MB  |
| registry.k8s.io/kube-proxy                  | v1.31.2           | 021d242013305 | 94.7MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.2           | 9404aea098d9e | 85.9MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-121000 image ls --format table --alsologtostderr:
I1209 15:54:23.765210    2625 out.go:345] Setting OutFile to fd 1 ...
I1209 15:54:23.765391    2625 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 15:54:23.765395    2625 out.go:358] Setting ErrFile to fd 2...
I1209 15:54:23.765397    2625 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 15:54:23.765530    2625 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
I1209 15:54:23.765948    2625 config.go:182] Loaded profile config "functional-121000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 15:54:23.766009    2625 config.go:182] Loaded profile config "functional-121000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 15:54:23.766863    2625 ssh_runner.go:195] Run: systemctl --version
I1209 15:54:23.766872    2625 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/functional-121000/id_rsa Username:docker}
I1209 15:54:23.798630    2625 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-121000 image ls --format json --alsologtostderr:
[{"id":"f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"91600000"},{"id":"021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"94700000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-121000"],"size":"4780000"},{"id":"8057e0500773a37cde2cff041eb
13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"bdf62fd3a32f1209270ede068b6e08450dfe125c79b1a8ba8f5685090023bf7f","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"197000000"},{"id":"dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f","repoDigests":[],"repoTags":["docker.io/lib
rary/nginx:alpine"],"size":"56900000"},{"id":"d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"66000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"c0c07426fabd6adb5cc8766b5b4430d2366a184c02257729ec9b25aaa1be2743","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-121000"],"size":"30"},{"id":"9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"85900000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-121000 image ls --format json --alsologtostderr:
I1209 15:54:23.682303    2623 out.go:345] Setting OutFile to fd 1 ...
I1209 15:54:23.682471    2623 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 15:54:23.682477    2623 out.go:358] Setting ErrFile to fd 2...
I1209 15:54:23.682479    2623 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 15:54:23.682598    2623 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
I1209 15:54:23.683036    2623 config.go:182] Loaded profile config "functional-121000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 15:54:23.683100    2623 config.go:182] Loaded profile config "functional-121000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 15:54:23.683990    2623 ssh_runner.go:195] Run: systemctl --version
I1209 15:54:23.684001    2623 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/functional-121000/id_rsa Username:docker}
I1209 15:54:23.713983    2623 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
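
The JSON stdout above is the raw contract for `image ls --format json`: a flat array of objects with id, repoDigests, repoTags, and size (bytes encoded as a string). The following is a minimal Go sketch, not part of the test suite, for consuming that output; the struct fields are taken from the captured log, and the file name is hypothetical.

// parse_image_ls.go - illustrative decoder for the `image ls --format json`
// output captured above; field names match the log, nothing else is implied.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, encoded as a string
}

func main() {
	var images []listedImage
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%s  %s bytes\n", tag, img.Size)
		}
	}
}

A plausible invocation, assuming the sketch is saved as parse_image_ls.go: out/minikube-darwin-arm64 -p functional-121000 image ls --format json | go run parse_image_ls.go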

TestFunctional/parallel/ImageCommands/ImageListYaml (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-121000 image ls --format yaml --alsologtostderr:
- id: dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "56900000"
- id: f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "91600000"
- id: d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "66000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "85900000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "94700000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: c0c07426fabd6adb5cc8766b5b4430d2366a184c02257729ec9b25aaa1be2743
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-121000
size: "30"
- id: bdf62fd3a32f1209270ede068b6e08450dfe125c79b1a8ba8f5685090023bf7f
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "197000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-121000
size: "4780000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-121000 image ls --format yaml --alsologtostderr:
I1209 15:54:23.506467    2619 out.go:345] Setting OutFile to fd 1 ...
I1209 15:54:23.506888    2619 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 15:54:23.506893    2619 out.go:358] Setting ErrFile to fd 2...
I1209 15:54:23.506895    2619 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 15:54:23.507305    2619 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
I1209 15:54:23.508264    2619 config.go:182] Loaded profile config "functional-121000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 15:54:23.508384    2619 config.go:182] Loaded profile config "functional-121000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 15:54:23.509219    2619 ssh_runner.go:195] Run: systemctl --version
I1209 15:54:23.509227    2619 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/functional-121000/id_rsa Username:docker}
I1209 15:54:23.539815    2619 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.09s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-121000 ssh pgrep buildkitd: exit status 1 (66.410333ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 image build -t localhost/my-image:functional-121000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-121000 image build -t localhost/my-image:functional-121000 testdata/build --alsologtostderr: (1.650051s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-121000 image build -t localhost/my-image:functional-121000 testdata/build --alsologtostderr:
I1209 15:54:23.916504    2629 out.go:345] Setting OutFile to fd 1 ...
I1209 15:54:23.916786    2629 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 15:54:23.916789    2629 out.go:358] Setting ErrFile to fd 2...
I1209 15:54:23.916792    2629 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 15:54:23.916925    2629 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20062-1231/.minikube/bin
I1209 15:54:23.917381    2629 config.go:182] Loaded profile config "functional-121000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 15:54:23.918163    2629 config.go:182] Loaded profile config "functional-121000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1209 15:54:23.919123    2629 ssh_runner.go:195] Run: systemctl --version
I1209 15:54:23.919132    2629 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/20062-1231/.minikube/machines/functional-121000/id_rsa Username:docker}
I1209 15:54:23.946515    2629 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.504947977.tar
I1209 15:54:23.946606    2629 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1209 15:54:23.950643    2629 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.504947977.tar
I1209 15:54:23.952627    2629 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.504947977.tar: stat -c "%s %y" /var/lib/minikube/build/build.504947977.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.504947977.tar': No such file or directory
I1209 15:54:23.952638    2629 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.504947977.tar --> /var/lib/minikube/build/build.504947977.tar (3072 bytes)
I1209 15:54:23.961809    2629 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.504947977
I1209 15:54:23.965306    2629 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.504947977 -xf /var/lib/minikube/build/build.504947977.tar
I1209 15:54:23.968774    2629 docker.go:360] Building image: /var/lib/minikube/build/build.504947977
I1209 15:54:23.968829    2629 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-121000 /var/lib/minikube/build/build.504947977
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.1s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:12d50eca8d20c74a18d13f1d0a88abaae6e9bd4e426eb134db5536a2ac8a6707 done
#8 naming to localhost/my-image:functional-121000 done
#8 DONE 0.0s
I1209 15:54:25.517606    2629 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-121000 /var/lib/minikube/build/build.504947977: (1.548761792s)
I1209 15:54:25.517702    2629 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.504947977
I1209 15:54:25.521577    2629 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.504947977.tar
I1209 15:54:25.524711    2629 build_images.go:217] Built localhost/my-image:functional-121000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.504947977.tar
I1209 15:54:25.524730    2629 build_images.go:133] succeeded building to: functional-121000
I1209 15:54:25.524733    2629 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 image ls
2024/12/09 15:54:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.79s)
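
For orientation, the ImageBuild flow above is: probe for buildkitd over ssh (a non-zero exit is tolerated), then run `image build` against the testdata/build context. Below is a self-contained Go sketch of that two-step sequence using os/exec; the binary path, profile name, and commands are copied from the log, while the error handling is an assumption, not the test's actual helpers.

// image_build_sketch.go - illustrative re-run of the two commands the
// ImageBuild test issues above; not the test's own implementation.
package main

import (
	"log"
	"os/exec"
)

func main() {
	const minikube = "out/minikube-darwin-arm64"
	const profile = "functional-121000"

	// Probe for buildkitd; in the log this exits 1 and the test proceeds anyway.
	if err := exec.Command(minikube, "-p", profile, "ssh", "pgrep", "buildkitd").Run(); err != nil {
		log.Printf("buildkitd probe failed (tolerated): %v", err)
	}

	// Build the image from the testdata context, mirroring functional_test.go:315.
	out, err := exec.Command(minikube, "-p", profile, "image", "build",
		"-t", "localhost/my-image:functional-121000", "testdata/build",
		"--alsologtostderr").CombinedOutput()
	if err != nil {
		log.Fatalf("image build failed: %v\n%s", err, out)
	}
	log.Printf("build output:\n%s", out)
}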

TestFunctional/parallel/ImageCommands/Setup (1.74s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.720219208s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-121000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.74s)

TestFunctional/parallel/DockerEnv/bash (0.31s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-121000 docker-env) && out/minikube-darwin-arm64 status -p functional-121000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-121000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.31s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.29s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-121000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-121000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-121000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-121000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2373: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.29s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 image load --daemon kicbase/echo-server:functional-121000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.46s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-121000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-121000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [2f35bc01-c39e-46f4-a2c1-100cdd0449b0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [2f35bc01-c39e-46f4-a2c1-100cdd0449b0] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003548875s
I1209 15:53:42.031220    1742 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.10s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 image load --daemon kicbase/echo-server:functional-121000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.37s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-121000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 image load --daemon kicbase/echo-server:functional-121000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 image save kicbase/echo-server:functional-121000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.14s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 image rm kicbase/echo-server:functional-121000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.27s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-121000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 image save --daemon kicbase/echo-server:functional-121000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-121000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.19s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-121000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.233.140 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1209 15:53:42.097363    1742 config.go:182] Loaded profile config "functional-121000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1209 15:53:42.142250    1742 config.go:182] Loaded profile config "functional-121000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-121000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-121000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-121000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-g9mlh" [53195f68-66ca-4017-90d4-c23e6e169cfa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-g9mlh" [53195f68-66ca-4017-90d4-c23e6e169cfa] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.008577875s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

TestFunctional/parallel/ServiceCmd/List (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 service list -o json
functional_test.go:1494: Took "310.682834ms" to run "out/minikube-darwin-arm64 -p functional-121000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:32190
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.13s)

TestFunctional/parallel/ServiceCmd/Format (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.11s)

TestFunctional/parallel/ServiceCmd/URL (0.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:32190
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "105.221292ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "39.734ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "103.992666ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "39.324834ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)

TestFunctional/parallel/MountCmd/any-port (6.09s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-121000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port142237890/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733788446863898000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port142237890/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733788446863898000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port142237890/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733788446863898000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port142237890/001/test-1733788446863898000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-121000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (70.695583ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1209 15:54:06.935115    1742 retry.go:31] will retry after 324.933373ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  9 23:54 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  9 23:54 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  9 23:54 test-1733788446863898000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh cat /mount-9p/test-1733788446863898000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-121000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d57622ac-d71d-4a27-a0ba-aabf0f342fab] Pending
helpers_test.go:344: "busybox-mount" [d57622ac-d71d-4a27-a0ba-aabf0f342fab] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d57622ac-d71d-4a27-a0ba-aabf0f342fab] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d57622ac-d71d-4a27-a0ba-aabf0f342fab] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004236541s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-121000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-121000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port142237890/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.09s)
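
The `retry.go:31] will retry after …` lines above show the polling pattern the mount tests rely on: probe the guest with findmnt over ssh and wait between attempts until the 9p mount appears. A generic Go sketch of that loop follows; the backoff constants, doubling strategy, and deadline are assumptions for illustration, not minikube's actual retry parameters.

// mount_wait_sketch.go - illustrative version of the probe-and-retry loop
// suggested by the retry.go lines above; not minikube's retry implementation.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForMount(deadline time.Duration) error {
	backoff := 300 * time.Millisecond
	stop := time.Now().Add(deadline)
	for {
		// Same probe the test runs: findmnt inside the guest via ssh.
		err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-121000",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("mount never appeared: %w", err)
		}
		time.Sleep(backoff)
		backoff *= 2 // grow the wait between probes, as the log's increasing delays suggest
	}
}

func main() {
	if err := waitForMount(10 * time.Second); err != nil {
		fmt.Println(err)
	}
}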

TestFunctional/parallel/MountCmd/specific-port (0.93s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-121000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2908044062/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-121000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (69.029167ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1209 15:54:13.024282    1742 retry.go:31] will retry after 397.230507ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-121000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2908044062/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-121000 ssh "sudo umount -f /mount-9p": exit status 1 (67.970334ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-121000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-121000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2908044062/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.93s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-121000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3252502787/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-121000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3252502787/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-121000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3252502787/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-121000 ssh "findmnt -T" /mount1: exit status 1 (84.47025ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1209 15:54:13.972015    1742 retry.go:31] will retry after 632.302758ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-121000 ssh "findmnt -T" /mount2: exit status 1 (63.709917ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1209 15:54:14.772520    1742 retry.go:31] will retry after 653.326129ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-121000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-121000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-121000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3252502787/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-121000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3252502787/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-121000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3252502787/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-121000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-121000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-121000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/CopyFile (0.04s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-677000 status --output json -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/CopyFile (0.04s)

TestImageBuild/serial/Setup (34.23s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-295000 --driver=qemu2 
E1209 16:24:43.015305    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/addons-188000/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-295000 --driver=qemu2 : (34.230453083s)
--- PASS: TestImageBuild/serial/Setup (34.23s)

TestImageBuild/serial/NormalBuild (1.33s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-295000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-295000: (1.334191292s)
--- PASS: TestImageBuild/serial/NormalBuild (1.33s)

TestImageBuild/serial/BuildWithBuildArg (0.44s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-295000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.44s)

TestImageBuild/serial/BuildWithDockerIgnore (0.33s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-295000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.33s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.31s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-295000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.31s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.06s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-256000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-256000 --output=json --user=testUser: (7.060544375s)
--- PASS: TestJSONOutput/stop/Command (7.06s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-043000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-043000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (102.7395ms)

-- stdout --
	{"specversion":"1.0","id":"4c9410a2-547a-4743-9311-2ce26924e764","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-043000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6016b1cf-440d-4ee0-a765-5ed175d1ab4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20062"}}
	{"specversion":"1.0","id":"fcc9dd0c-0d06-4b22-a714-1a30223f2e9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig"}}
	{"specversion":"1.0","id":"01c73a34-2105-46a2-8c29-3026a10a2e90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"130ebc00-7e5a-4c17-b14b-62c0f384e0ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e9e3f1cd-6f55-4451-8e92-20004a2ad464","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube"}}
	{"specversion":"1.0","id":"ccdee755-2164-4d30-b27b-b19d0c76f50c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"91102305-1009-4906-915b-ae3a1606ee67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-043000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-043000
--- PASS: TestErrorJSONOutput (0.22s)
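Note: every line in the stdout block above is a CloudEvents-style JSON object emitted by --output=json. The sketch below is not part of the test suite; it shows one way such lines could be decoded in Go. The struct mirrors only the keys visible above, and "data" is kept as a string map because its keys vary by event type.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the JSON lines shown in the stdout block above.
type cloudEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// e.g. minikube start --output=json | this-program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON log lines
		}
		// For io.k8s.sigs.minikube.error events, data also carries
		// "exitcode" and "name" (e.g. DRV_UNSUPPORTED_OS above).
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}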

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (1.07s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.07s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-507000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-507000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (107.711083ms)

-- stdout --
	* [NoKubernetes-507000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20062
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20062-1231/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20062-1231/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
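Note: exit status 14 above is the usage-error path: --kubernetes-version cannot be combined with --no-kubernetes. Below is a hedged illustration of such a mutual-exclusion check using Go's standard flag package; it is not minikube's actual implementation, and only the flag names, message, and exit code are taken from the log.

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	// Flag names taken from the invocation in the log above.
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
	flag.Parse()

	if *noK8s && *k8sVersion != "" {
		// Mirrors the MK_USAGE error and exit status 14 seen above.
		fmt.Fprintln(os.Stderr,
			"cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
}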

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-507000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-507000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.637042ms)

-- stdout --
	* The control-plane node NoKubernetes-507000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-507000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

TestNoKubernetes/serial/ProfileList (31.51s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.680556958s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.831700041s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.51s)

TestNoKubernetes/serial/Stop (3.18s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-507000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-507000: (3.176893s)
--- PASS: TestNoKubernetes/serial/Stop (3.18s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-507000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-507000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.576208ms)

-- stdout --
	* The control-plane node NoKubernetes-507000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-507000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-632000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)

TestStartStop/group/old-k8s-version/serial/Stop (3.56s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-493000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-493000 --alsologtostderr -v=3: (3.564678292s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.56s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-493000 -n old-k8s-version-493000: exit status 7 (55.307417ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-493000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
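Note: the harness treats exit status 7 from "minikube status" as acceptable after a stop ("may be ok" above). A minimal sketch of extracting that exit code with os/exec, assuming only the command line and exit status visible in this entry:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "old-k8s-version-493000")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		// Exit 7 means the host is stopped, which is expected here.
		fmt.Printf("stopped (may be ok): %s", out)
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Printf("host state: %s", out)
}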

TestStartStop/group/no-preload/serial/Stop (2.87s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-558000 --alsologtostderr -v=3
E1209 16:58:30.965048    1742 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/20062-1231/.minikube/profiles/functional-121000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-558000 --alsologtostderr -v=3: (2.865674125s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.87s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-558000 -n no-preload-558000: exit status 7 (62.328042ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-558000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.89s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-877000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-877000 --alsologtostderr -v=3: (3.885175583s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.89s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-877000 -n default-k8s-diff-port-877000: exit status 7 (56.88875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-877000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-757000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.17s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-757000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-757000 --alsologtostderr -v=3: (3.167649709s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.17s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-757000 -n newest-cni-757000: exit status 7 (61.626958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-757000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.42s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-583000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-583000 --alsologtostderr -v=3: (3.421984625s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.42s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-583000 -n embed-certs-583000: exit status 7 (60.727375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-583000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/274)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.48s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-884000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-884000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-884000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-884000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-884000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-884000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-884000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-884000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-884000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-884000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-884000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: /etc/hosts:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: /etc/resolv.conf:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-884000

>>> host: crictl pods:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: crictl containers:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> k8s: describe netcat deployment:
error: context "cilium-884000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-884000" does not exist

>>> k8s: netcat logs:
error: context "cilium-884000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-884000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-884000" does not exist

>>> k8s: coredns logs:
error: context "cilium-884000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-884000" does not exist

>>> k8s: api server logs:
error: context "cilium-884000" does not exist

>>> host: /etc/cni:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: ip a s:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: ip r s:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: iptables-save:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: iptables table nat:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-884000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-884000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-884000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-884000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-884000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-884000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-884000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-884000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-884000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-884000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-884000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: kubelet daemon config:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> k8s: kubelet logs:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-884000

>>> host: docker daemon status:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: docker daemon config:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: docker system info:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: cri-docker daemon status:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: cri-docker daemon config:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: cri-dockerd version:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: containerd daemon status:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: containerd daemon config:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: containerd config dump:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: crio daemon status:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: crio daemon config:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: /etc/crio:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

>>> host: crio config:
* Profile "cilium-884000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-884000"

----------------------- debugLogs end: cilium-884000 [took: 2.362382292s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-884000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-884000
--- SKIP: TestNetworkPlugins/group/cilium (2.48s)

TestStartStop/group/disable-driver-mounts (0.12s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-649000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-649000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)
